
Feed aggregator

Important Update from Telerik Regarding TeamPulse Licenses

Telerik TestStudio - Tue, 11/29/2016 - 05:00
TeamPulse users, please read this important update from Telerik regarding your TeamPulse licenses. (Brandon Satrom)
Categories: Companies

Shift Left: Testing in the Agile World

Software Testing Magazine - Mon, 11/28/2016 - 18:26
Performing testing in an Agile context requires a completely different approach to software testing activities, often called “Shift Left”. The term emphasizes moving software quality activities to the beginning of the software development life cycle. In this article, Kishan Sundar shares his perspective on the consequences for Scrum teams of shifting software testing left in Agile. Author: Kishan Sundar, Maveric Systems, http://maveric-systems.com/ In the digital world, constant innovation and upkeep is a day-to-day routine rather than a scheduled maintenance activity that disrupts that routine. Innovation has been embraced and fueled by end-user expectations: every consumer is constantly seeking more from their bank, an online shopping website or even their internet provider’s complaints portal. With technologists constantly striving to improve the experience for demanding customers, the whole world has to grapple with incessant operating system and device upgrades, feature enhancements and a number of other changes that might require a complete, or at least partial, overhaul of existing systems every month. To keep up with these whirling changes, businesses globally adopted Agile. Agile is an approach in which software is developed incrementally: one complete feature is developed in what is called a sprint, testing is done within every sprint, and subsequent sprints build on that foundation. Doing things the Agile way offers quick releases, enables defect prevention and, most importantly, makes room for reassessment and subsequent realignment of plans. With this approach, [...]
Categories: Communities

Quality Software Australia 2017 Call for Speakers

Software Testing Magazine - Mon, 11/28/2016 - 16:00
The inaugural Quality Software Australia (QSA2017) conference will take place on May 10-12, 2017 in Melbourne. You can participate as a presenter at this conference, choosing between a talk and a workshop. Are you passionate about software quality and software testing and want to share it? A controversial idea, an interesting experience or something you have learned lately? Submit your abstract and get a chance to be part of a great team of speakers. If you live interstate or overseas, you may even be eligible for reimbursement of your travel expenses. Quality Software Australia hopes to receive a broad range of submissions for the conference, so there is no set theme. Proposals based on personal experiences are encouraged. Get more information and apply at http://www.qualitysoftware.com.au/speak/
Categories: Communities

Now Live on DevOps Radio: Picture-Perfect CD, Featuring Dean Yu, Director, Release Engineering, Shutterfly

Jenkins World 2016 was buzzing with the latest in DevOps, CI/CD, automation and more. DevOps Radio wanted to capture some of that energy so we enlisted the help of Sacha Labourey, CEO at CloudBees, to host a series of episodes live at the event. We’re excited to present a new three-part series, DevOps Radio: Live at Jenkins World. This is episode two in the series.

Dean Yu, director of release engineering at Shutterfly, has been with the Jenkins community since before Jenkins was called Jenkins. Today, he’s a member of the Jenkins governance board and an expert in all things Jenkins and CI. He attended Jenkins World 2016 to catch up with the community, check out some sessions and sit down with Sacha Labourey for a special episode of DevOps Radio.

Sacha had a lot of questions for Dean, but the very first question he asked was, “What is new at Shutterfly?” Dean revealed how his team is using Jenkins, working on CI/CD and keeping pace with business during Shutterfly’s busiest season, the holidays. If you’re interested in learning CI/CD best practices or hearing what one Jenkins leader thinks about the future of software development and delivery, then you need to tune in today!

You don’t have to stop making your holiday card or photo book on Shutterfly.com; just plug in your headphones and tune in to DevOps Radio. The latest DevOps Radio episode is available now on the CloudBees website and on iTunes.

Join the conversation about the episode on Twitter by tweeting to @CloudBees and including #DevOpsRadio in your post. After you listen, we want to know your thoughts. What did you think of this episode? What do you want to hear on DevOps Radio next? And, what’s on your holiday DevOps wishlist?

Sacha Labourey and Dean Yu talk about CD at Shutterfly, during Jenkins World 2016 (below).
P.S. Check out Dean’s massive coffee cup. It displays several pictures of his daughter and was created - naturally - on the Shutterfly website.

Categories: Companies

What really is an Agile MVP?

There is often a bit of misunderstanding about what an MVP (Minimum Viable Product) is in an Agile context. MVPs are meant to provide the minimal functionality or feature set that will be useful to customers. However, to attempt to define the minimal set up front means that you know what the customer wants from the start. How often do you know what the customer wants at the beginning?

Instead, think of an MVP as an opportunity to learn what the customer wants. It should neither be fixed nor should you be certain of what it is. Consider it an evolving concept from which you learn what the customer wants over time. What mindset shifts might you have to make in order to adapt to what an MVP is in an Agile world?

The first Agile mindset shift is that you should not define an MVP upfront. Defining an MVP upfront is akin to big up-front planning. You can certainly hypothesize what the minimal set of features might be, but you must have a mindset and practices that lead you to validate your assumptions and hypotheses. You can start with a vision or general idea of what might be minimal and valuable to the customer, but the moment you attempt to precisely define the set of features, you are not really following Agile and, more egregiously, you are doing a disservice to your customer.

The second Agile mindset shift is that customer feedback is key to evolving the MVP. If you want your MVP to align closely with customer value, you must include continuous customer feedback loops when working on an MVP. These can take the form of customer demos or hands-on sessions. Customer feedback can start as early as when you are hypothesizing what the MVP is, and it must be part of evolving the MVP, with the "inspect" in inspect-and-adapt coming from the customer. Eric Ries writes that an MVP “allows a team to collect the maximum amount of validated learning about customers with the least effort.” Customer feedback is the cornerstone of validated learning.

So who really determines what the MVP is? If you think the answer is you, your management, or your team, then maybe it's time to reduce your certainty and ready your mind with the Agile mindset, a discovery mindset, and feedback loops. The right answer is that the customer determines what the MVP is in Agile. The more closely you align with customers throughout the effort, the more likely you are to end up with an MVP that the customer considers valuable.
Categories: Blogs

The Well Rounded Architect

thekua.com@work - Sun, 11/27/2016 - 17:39

In this blog post, I explore the six different dimensions I covered in my recent talk at the O’Reilly Software Architecture conference in London called “The Well Rounded Architect.”

The elements of the well-rounded architect

Acting as a Leader

Good software architects understand that their role as a leader is not necessarily to tell developers what to do. Rather, good architects act like guides, shepherding a team of developers towards a shared technical vision, drawing upon leadership skills such as storytelling, influencing, navigating conflict and building trust with individuals to turn their architectural vision into reality.

A good leader, and thus, a good architect, will listen carefully to the opinions of each contributor, fine-tuning their vision with feedback from the team. This leads well onto the next point.

Being a developer

Making good architectural choices is a function of balancing an ideal target architecture with the current state of a software system. As an example, there is no sense in adding a document database to a system if the problem domain is better suited to a relational database, even if that’s boring. An architect may feel tempted to impose technologies or architectural choices without considering their fit for the problem space – the behaviour of the “ivory tower architect.”

The best way an architect can mitigate this is by spending time with developers and time in the code. Understanding how the system has been built up, and the constraints of the system as it stands today will give the architect more information about the right choices for today’s environment.

Having a systems focus

Seasoned developers know that code is only one aspect of working software. They understand that other quality attributes are necessary for code to run well in its production environment: deployment processes, automated testing, performance, security, and supportability. Where developers may approach these quality attributes ad hoc, an architect will focus on understanding not just the code but also the quality attributes necessary to meet the many needs of different stakeholders such as support, security, and operations staff.

The good architect focuses on finding solutions that satisfy as many of these different stakeholder needs as possible, instead of choosing a tool or approach optimised for the preferences or style of a single contributor.

Thinking like an entrepreneur

All technology choices have costs and benefits, and a good architect will consider new technology choices from both perspectives. Successful entrepreneurs are willing to take risks, but seek ways to learn quickly and fail fast. Architects can approach technology choices in a similar way, seeking real-world information about short- and long-term costs and the likely benefits they will realise.

A good example is when the architect avoids committing to a new tool based on reading an article or having heard about it at a conference. Instead they seek to understand how relevant the tool is to their environment by running an architectural spike to gather more information. They don’t pick a tool based on how good the sales pitch is, but on what value it offers given what they need for their system. They also look for the hidden costs of tools, such as how well a tool is supported (e.g. level of documentation, community adoption), how much lock-in the tool brings, and the extra risks it introduces over the long term.

Balancing strategic with tactical thinking

A lot of teams build their software reactively with individual developers choosing tools and technologies that they are most comfortable with, or have the most experience with.

The good architect keeps an eye out for newer technologies, tools or approaches that might be useful, but does not necessarily adopt them immediately. Technology adoption requires a considered approach with a long-term horizon. Architects seek a good balance between agility (allowing the team to move fast) and alignment (keeping enough consistency) at both team and organisational level.

An exercise like Build Your Own Tech Radar is a useful way to explore technologies with strategy in mind.

Communicating well

Architects know that effective communication is a key skill for building trust and influencing people outside the team. They know that different groups of people use different vocabularies, and that using technical terms and descriptions with business people makes communication more difficult. Instead of talking about patterns, tools and programming concepts, the architect uses words their audience is familiar with. Communicating technical choices to business people in terms of risk, return, costs and benefits will serve an architect better than the words they use with their development team.

An architect also realises that communicating within the team is just as important as communicating outside it, and will use diagrams and group discussions to establish and refine the technical vision, and a written log such as an Architectural Decision Log or a wiki to provide a historical trail for future generations.

Conclusion

Doing the job of a well-rounded architect is not easy. There are so many elements to focus on, each drawing upon skills that a developer often doesn’t practice. What matters most is not necessarily how deep an architect's ability runs in any one area, but that they have enough expertise in each of these different areas to be effective. An architect who is skillful in only one of the six areas described above will not be as effective as an architect who has a good level of expertise in all of them.

Categories: Blogs

Tester or Laborer?

Thinking Tester - Sun, 11/27/2016 - 16:17
A friend of mine sent me a link to this article on PMP and project managers that brings out an aspect of our profession, testing, so beautifully. Are we knowledge workers paid for our expertise, or laborers?

How does what Stuart says about PMP and project management apply to testing? I believe that, more than certification, the testing profession is hurt by the way we define testing poorly and adopt a model of testing that eliminates the need for skill and focuses on mindless repetition of documented procedures.

Time to reflect. If we define and accept a definition of testing that systematically undermines the skill element and focuses on process, tools, metrics and so on, there is no doubt that we will become laborers.

Is testing rule based?

How much of good testing is rule based?

Rethinking Equivalence Class Partitioning, Part 1

James Bach's Blog - Sun, 11/27/2016 - 14:41

Wikipedia’s article on equivalence class partitioning (ECP) is a great example of the poor thinking and teaching and writing that often passes for wisdom in the testing field. It’s narrow and misleading, serving to imply that testing is some little game we play with our software, rather than an open investigation of a complex phenomenon.

(No, I’m not going to edit that article. I don’t find it fun or rewarding to offer my expertise in return for arguments with anonymous amateurs. Wikipedia is important because it serves as a nearly universal reference point when criticizing popular knowledge, but just like popular knowledge itself, it is not fixable. The populus will always prevail, and the populus is not very thoughtful.)

In this article I will comment on the Wikipedia post. In a subsequent post I will describe ECP my way, and you can decide for yourself if that is better than Wikipedia.

“Equivalence partitioning or equivalence class partitioning (ECP)[1] is a software testing technique that divides the input data of a software unit into partitions of equivalent data from which test cases can be derived.”

Not exactly. There’s no reason why ECP should be limited to “input data” as such. The ECP thought process may be applied to output, or even versions of products, test environments, or test cases themselves. ECP applies to anything you might be considering to do that involves any variations that may influence the outcome of a test.

Yes, ECP is a technique, but a better word for it is “heuristic.” A heuristic is a fallible method of solving a problem. ECP is extremely fallible, and yet useful.

“In principle, test cases are designed to cover each partition at least once. This technique tries to define test cases that uncover classes of errors, thereby reducing the total number of test cases that must be developed.”

This text is pretty good. Note the phrase “In principle” and the use of the word “tries.” These are softening words, which are important because ECP is a heuristic, not an algorithm.

Speaking in terms of “test cases that must be developed,” however, is a misleading way to discuss testing. Testing is not about creating test cases. It is for damn sure not about the number of test cases you create. Testing is about performing experiments. And the totality of experimentation goes far beyond such questions as “what test case should I develop next?” The text should instead say “reducing test effort.”

“An advantage of this approach is reduction in the time required for testing a software due to lesser number of test cases.”

Sorry, no. The advantage of ECP is not in reducing the number of test cases. Nor is it even about reducing test effort, as such (even though it is true that ECP is “trying” to reduce test effort). ECP is just a way to systematically guess where the bigger bugs probably are, which helps you focus your efforts. ECP is a prioritization technique. It also helps you explain and defend those choices. Better prioritization does not, by itself, allow you to test with less effort, but we do want to stumble into the big bugs sooner rather than later. And we want to stumble into them with more purpose and less stumbling. And if we do that well, we will feel comfortable spending less effort on the testing. Reducing effort is really a side effect of ECP.

“Equivalence partitioning is typically applied to the inputs of a tested component, but may be applied to the outputs in rare cases. The equivalence partitions are usually derived from the requirements specification for input attributes that influence the processing of the test object.”

Typically? Usually? Has this writer done any sort of research that would substantiate that? No.

ECP is a process that we all do informally, not only in testing but in our daily lives. When you push open a door, do you consciously decide to push on a specific square centimeter of the metal push plate? No, you don’t. You know that for most doors it doesn’t matter where you push. All pushable places are more or less equivalent. That is ECP! We apply ECP to anything that we interact with.

Yes, we apply it to output. And yes, we can think of equivalence classes based on specifications, but we also think of them based on all other learning we do about the software. We perform ECP based on all that we know. If what we know is wrong (for instance if there are unexpected bugs) then our equivalence classes will also be wrong. But that’s okay, if you understand that ECP is a heuristic and not a golden ticket to perfect testing.

“The fundamental concept of ECP comes from equivalence class which in turn comes from equivalence relation. A software system is in effect a computable function implemented as an algorithm in some implementation programming language. Given an input test vector some instructions of that algorithm get covered, ( see code coverage for details ) others do not…”

At this point the article becomes Computer Science propaganda. This is why we can’t have nice things in testing: as soon as the CS people get hold of it, they turn it into a little logic game for gifted kids, rather than a pursuit worthy of adults charged with discovering important problems in technology before it’s too late.

The fundamental concept of ECP has nothing to do with computer science or computability. It has to do with logic. Logic predates computers. An equivalence class is simply a set. It is a set of things that share some property. The property of interest in ECP is utility for exploring a particular product risk. In other words, an equivalence class in testing is an assertion that any member of that particular group of things would be more or less equally able to reveal a particular kind of bug if it were employed in a particular kind of test.

If I define a “test condition” as something about a product or its environment that could be examined in a test, then I can define equivalence classes like this: An equivalence class is a set of tests or test conditions that are equivalent with respect to a particular product risk, in a particular context. 

This implies that two inputs which are not equivalent for the purposes of one kind of bug may be equivalent for finding another kind of bug. It also implies that if we model a product incorrectly, we will also be unable to know the true equivalence classes. Actually, considering that bugs come in all shapes and sizes, to have the perfectly correct set of equivalence classes would be the same as knowing, without having tested, where all the bugs in the product are. This is because ECP is based on guessing what kind of bugs are in the product.

If you read the technical stuff about Computer Science in the Wikipedia article, you will see that the author has decided that two inputs which cover the same code are therefore equivalent for bug finding purposes. But this is not remotely true! This is a fantasy propagated by people who I suspect have never tested anything that mattered. Off the top of my head, code-coverage-as-gold-standard ignores performance bugs, requirements bugs, usability bugs, data type bugs, security bugs, and integration bugs. Imagine two tests that cover the same code, and both involve input that is displayed on the screen, except that one includes an input which is so long that when it prints it goes off the edge of the screen. This is a bug that the short input didn’t find, even though both inputs are “valid” and “do the same thing” functionally.
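
As a minimal sketch of this point, assuming an 80-column display limit and hypothetical classification functions, two inputs can fall into the same equivalence class with respect to one risk and different classes with respect to another:

    SCREEN_WIDTH = 80  # assumed display limit for this sketch

    def functional_class(s):
        # Partition by what the functional code distinguishes: empty vs. non-empty
        return "empty" if not s else "non-empty"

    def display_class(s):
        # Partition by a risk that code coverage ignores: does the input fit on screen?
        return "fits" if len(s) <= SCREEN_WIDTH else "overflows"

    short_input = "hello"
    long_input = "x" * 500

    # Equivalent with respect to one kind of bug...
    assert functional_class(short_input) == functional_class(long_input)
    # ...but not with respect to another.
    assert display_class(short_input) != display_class(long_input)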

The Fundamental Problem With Most Testing Advice Is…

The problem with most testing advice is that it is either uncritical folklore that falls apart as soon as you examine it, or else it is misplaced formalism that doesn’t apply to realistic open-ended problems. Testing advice is better when it is grounded in a general systems perspective as well as a social science perspective. Both of these perspectives understand and use heuristics. ECP is a powerful, ubiquitous, and rather simple heuristic, whose utility comes from and is limited by your mental model of the product. In my next post, I will walk through an example of how I use it in real life.

Categories: Blogs

CITCON in New York City

Integrating the world….continuously - Sat, 11/26/2016 - 21:14

I am very excited that we will be hosting CITCON in New York City on December 9 & 10, 2016.

Registrations are still open: http://citconf.com/newyork2016/

I am proud that my company, Intent Media (https://intentmedia.com/), has signed on as the Venue Sponsor. As Chief Technology Officer, I am excited to showcase some of the great things we have been doing at Intent, such as:

* mob programming
* serverless architectures
* employee growth based management
* continuous delivery
* polyglot programming

Should be tons of fun! Join us!

Categories: Blogs

Software Quality Conference, Amersfoort, Netherlands, March 23 2017

Software Testing Magazine - Thu, 11/24/2016 - 08:00
Software Quality Conference is a one-day event focused on software testing that takes place in Amersfoort, Netherlands. It is aimed at expanding your insights on Agile, testing and DevOps to improve your software reliability. All the talks are in Dutch. In the agenda of the Software Quality Conference you can find topics like “Automated Testing as a success factor in Agile”, “How do I develop secure software in an era full of high expectations”, “Agile practices for accelerating business innovation”, “Software quality by project quality”, “Walking in narrow shoes – when Agile doesn’t fit?”, “Investigating critical software failures – Houston we have a problem” and “Testing apps the crowdsourced way”. Web site: http://softwarequality.heliview.nl/ Location for the Software Quality Conference: De Prodentfabriek, Oude Fabriekstraat 20, 3812 NR Amersfoort
Categories: Communities

Cambridge Lean Coffee

Hiccupps - James Thomas - Thu, 11/24/2016 - 07:23

This month's Lean Coffee was hosted by Abcam. Here are some brief, aggregated comments and questions on topics covered by the group I was in.

Suggest techniques for identifying and managing risk on an integration project.
  • Consider the risk in your product, risk in third-party products, and risk in the integration
  • Consider what kinds of risk your stakeholders care about, and risk to whom (e.g. risk to the bottom line, customer data, sales, team morale ...)
  • ... your risk-assessment and mitigation strategies may be different for each
  • Consider mitigating risk in your own product, or in those you are integrating with
  • Consider hazards and harms
  • Hazards are things that pose some kind of risk (objects and behaviours, e.g. a delete button, or corruption of a database)
  • Harms are the effects those hazards might have (e.g. deleting unexpected content, or serving incomplete results)
  • Consider probabilities and impacts of each harm, to provide a way to compare them (see the sketch after this list)
  • Advocate for the resources that you think you need 
  • ... and explain what you won't (be able to) do without them
  • Take a bigger view than a single tester alone can provide
  • ... perhaps something like the Three Amigos (and other stakeholders)
  • Consider what you can do in future to mitigate these kinds of risks earlier
  • Categorise the issues you've found already; they are evidence for areas of the product that may be riskier
  • ... or might show that your test strategy is biased
  • Remember that the stuff you don't know you don't know is a potential risk too: should you ask for time to investigate that?
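
As a minimal sketch of that comparison, with purely illustrative probabilities and impact scores, harms can be ranked by a simple risk score:

    # Purely illustrative numbers: (probability, impact on a 1-10 scale)
    harms = {
        "deleting unexpected content": (0.05, 9),
        "serving incomplete results": (0.30, 4),
    }

    # Rank harms by risk score = probability x impact, highest first
    ranked = sorted(harms.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)
    for harm, (p, impact) in ranked:
        print(f"{harm}: risk score {p * impact:.2f}")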

Didn't get time to discuss some of my own interests: How-abouts and What-ifs, and Not Sure About Uncertainty.

Can templates be used to generate tests?
  • Some programming languages have templates for generating code 
  • ... can the same idea apply to tests?
  • The aim is to code tests faster; there is a lot of boilerplate code (in the project being discussed)
  • How would a template know what the inputs and expectations are?
  • Automation is checking rather than testing
  • Consider data-driven testing and QuickCheck-style property-based testing (see the sketch after this list)
  • Consider asking for testability in the product to make writing test code easier (if you are spending time reverse-engineering the product in order to test it)
  • ... e.g. ask for consistent Ids of objects in and across web pages
  • Could this (perceived) problem be alleviated by factoring out the boilerplate code?
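
As a sketch of the QuickCheck idea in Python, assuming the hypothesis library is installed and using a hypothetical whitespace-normalising function, the framework generates the inputs and the author only states the property:

    from hypothesis import given, strategies as st

    def normalise_whitespace(s):
        # Hypothetical function under test: collapse runs of whitespace
        return " ".join(s.split())

    @given(st.text())  # hypothesis generates arbitrary text inputs
    def test_normalise_is_idempotent(s):
        once = normalise_whitespace(s)
        assert normalise_whitespace(once) == once  # applying twice changes nothing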

How can the coverage of manual and automated testing be compared?
  • Code coverage tools could, in principle, give some idea of coverage
  • ... but they have known drawbacks
  • ... and it might be hard to tie particular tester activity to particular paths through the code to understand where overlap exists
  • Tagging test cases with e.g. story identifiers can help to track where coverage has been added (but not what the coverage is; see the sketch after this list)
  • What do we really mean by coverage?
  • What's the purpose of the exercise? To retire manual tests?
  • One participant is trying to switch to test automation for regression testing
  • ... but finding it hard to have confidence in the automation
  • ... because of the things that testers can naturally see around whatever they are looking at, that the automation does not give
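
As a sketch of that tagging idea, assuming pytest with a custom "story" marker (registered in pytest.ini so it does not warn) and a hypothetical story identifier:

    import pytest

    # Assumes pytest.ini registers the marker:
    #   [pytest]
    #   markers = story(id): ties a test to the story whose coverage it added

    @pytest.mark.story("STORY-123")  # hypothetical story identifier
    def test_discount_is_applied():
        assert 100 * (1 - 0.2) == 80.0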

What are the pros and cons of being the sole tester on a project?
  • Chance to take responsibility, build experience ... but can be challenging if the tester is not ready for that
  • Chance to make processes etc. that work for you ... but perhaps there are efficiencies in sharing process too
  • Chance to own your work ... but miss out on other perspectives
  • Chance to express yourself ... but can feel lonely
  • Could try all testers on all projects (e.g. to help when people are on holiday or sick)
  • ... but this is potentially expensive and people complain about being thinly sliced
  • Could try sharing testing across the project team (if an issue is that there's insufficient resource for the testing planned)
  • Could set up sharing structures, e.g. team standup, peer reviews/debriefs, or pair testing across projects

What do (these) testers want from a test manager?
  • Clear product strategy
  • As much certainty as possible
  • Allow and encourage learning
  • Allow and encourage contact with testers from outside the organisation
  • Recognition that testers are different and have different needs
  • Be approachable
  • Give advice based on experience
  • Work with the tester 
  • ... e.g. coaching, debriefing, pointing out potential efficiency, productivity, testing improvements
  • Show appreciation
  • Must have been a tester
Image: https://flic.kr/p/bumiPG
Categories: Blogs

Protected: Docker Compose Deployments for the Enterprise

IBM UrbanCode - Release And Deploy - Wed, 11/23/2016 - 17:57

This content is password protected.

Categories: Companies

Webinar Recording and Q&A : What’s New in TestTrack 2016.1

The Seapine View - Wed, 11/23/2016 - 15:30

Thanks to everyone who attended the TestTrack 2016.1 Sneak Peek webinar last week. The webinar recording is now available if you weren’t able to attend or if you would like to watch it again. The Q&A from the webinar follows.


Q&A

When will the release be available?

TestTrack 2016.1 is expected to ship on December 19, 2016.

Can attachments be transferred to JIRA?

Not in the 2016.1 release. But once you’ve created a JIRA issue in TestTrack, it’s easy to open the issue in JIRA and add additional information.

Is an extra license required to use the JIRA integration?

No. There is no additional TestTrack license required to use the JIRA integration. Your users will, of course, need a JIRA license.

Can I attach JIRA issues to requirements?

Yes. JIRA issues can be attached to any type of TestTrack item.

Is there a JIRA add-on?

Not yet. Look for this in a future release!

Categories: Companies

From 0 To DevOps in 80 Days: The Dynatrace Transformation Story!

Market disruption can spark innovation and radical change, and DevOps — as a set of best practices — has emerged from software industry disruptions. Why? Because, over the years, delivering software in many organizations has become harder, slower and more error prone. Outdated technology became a disadvantage for older, established companies competing against startups without years of […]

The post From 0 To DevOps in 80 Days: The Dynatrace Transformation Story! appeared first on about:performance.

Categories: Companies

Come meet the storm in the cloud at AWS re:Invent Las Vegas 2016

HP LoadRunner and Performance Center Blog - Tue, 11/22/2016 - 19:31

AWS re:Invent will be heating up Las Vegas next week. Keep reading to learn how to connect with our team and have some fun.

Categories: Companies

Parasoft Continuous Testing within Microsoft Visual Studio

Software Testing Magazine - Tue, 11/22/2016 - 18:41
Parasoft has announced that its industry-leading service virtualization technology, Parasoft Virtualize, is now available on the Microsoft Azure Marketplace and Microsoft Visual Studio Team System (VSTS) Marketplace. Service virtualization (an extension of “API Virtualization”) helps organizations accelerate time to market without compromising quality. Especially in Agile environments, dependent components (e.g., APIs, 3rd-party services, databases, mainframes, etc.) connected to the application under test are not readily accessible for development and testing because they are still evolving, beyond a team’s control, or too costly/complex to configure in a test lab. With service virtualization, testing and development can proceed without waiting for access to the actual dependent components. The combination of Microsoft Azure, Microsoft VSTS, and Parasoft Service Virtualization — operating natively within the Microsoft environment — is designed to provide teams the rapid, scalable, and flexible test environment access required for Agile, DevOps, and “Continuous Everything.”
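
As a crude sketch of the underlying idea only (Parasoft Virtualize records and models far richer behaviour), a canned HTTP responder can stand in for an unavailable dependent API; the endpoint and payload here are hypothetical:

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class VirtualService(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/accounts/42":  # the simulated dependency's endpoint
                body = json.dumps({"id": 42, "status": "active"}).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)  # canned response instead of the real system
            else:
                self.send_response(404)
                self.end_headers()

    if __name__ == "__main__":
        # Tests point at localhost:8080 instead of the unavailable real service
        HTTPServer(("localhost", 8080), VirtualService).serve_forever()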
Categories: Communities

Perforce Acquires Seapine Software

Software Testing Magazine - Tue, 11/22/2016 - 18:32
Perforce Software, a vendor of version control and source code management tools, has announced the acquisition of Seapine Software, a provider of application lifecycle management (ALM) solutions. This acquisition expands the Perforce portfolio of developer and designer tools beyond enterprise-class version management and code review, and will provide its customers with additional capabilities across the development pipeline. Seapine is the vendor of the TestTrack tool and the QA Wizard Pro functional, stress, and load testing tool. Founded in 1995, Seapine is headquartered in Cincinnati, Ohio, with offices in Europe, Asia-Pacific, and Africa and over 8,500 customers worldwide.
Categories: Communities

Testing Safety in IoT and Embedded Software

Testing TV - Tue, 11/22/2016 - 17:43
Software is being embedded in more and more devices. IoT is poised to become the growth area for software and testing, with billions of devices and new software connected to the internet. IoT is the intersection of embedded, mobile, communications, and traditional software environments in combination with user demands. Embedded devices face new challenges in [...]
Categories: Blogs

Evaluating Test Cases Quality With Mutation Testing

Software Testing Magazine - Tue, 11/22/2016 - 16:38
How good are your test cases? Maybe they are good, or maybe you need to add some new ones. How can you tell? You can measure things like code coverage to check whether some parts of your code were not executed. Still, this does not tell you anything about the quality of your assertions or your software testing results. In the extreme case, a test suite with no assertions might still achieve 100% code coverage while being of questionable value. Mutation testing is a technique that automatically injects faults into your code and then checks whether your test cases can “kill” those “mutants”. In this way, the quality and effectiveness of your assertions can be evaluated. This talk presents the benefits and challenges of mutation testing, and also shows how to use open source tools like PIT. Video producer: http://oredev.org/
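
As a toy sketch of the idea, with a hypothetical pricing function and a hand-written mutant (tools like PIT generate mutants automatically), a coverage-only test lets the mutant survive while an asserting test kills it:

    def price_with_discount(price, rate):
        return price * (1 - rate)

    def mutant_price_with_discount(price, rate):
        return price * (1 + rate)  # injected fault: '-' mutated to '+'

    def weak_test(fn):
        fn(100.0, 0.2)  # executes the code (full coverage) but asserts nothing
        return True

    def strong_test(fn):
        return fn(100.0, 0.2) == 80.0  # the assertion is what kills the mutant

    # The weak test passes for both original and mutant: the mutant survives
    assert weak_test(price_with_discount) and weak_test(mutant_price_with_discount)
    # The strong test passes for the original and fails for the mutant
    assert strong_test(price_with_discount) and not strong_test(mutant_price_with_discount)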
Categories: Communities

Are you the Signal or the Noise?

PractiTest - Tue, 11/22/2016 - 16:25

There is an ongoing theme around Signal vs Noise in organizations. It originates from the signal-to-noise ratio in radio, and describes the problem of too much noise hampering the effectiveness of the signal.

I think many testers should try to understand this principle (the organizational one, not the radio one) and apply it to the way they manage and communicate their testing.

In a nutshell, you need to separate the important SIGNAL from the irrelevant NOISE. Understand where to focus your efforts and, more importantly, what to communicate to the rest of the world.

The problem is that many times, when we communicate with the people in our company, we do not stop to think about what information is important and relevant to them (this is the SIGNAL) and what is not (this is the NOISE).

Then we write a large report, or come to a meeting and start reciting everything in one big blob of information that surely includes some gold nuggets, but they are completely lost in the flow of irrelevant stuff.

Some things to keep in mind that will make a big difference:

Always understand what is important to the people you are talking to. This will depend on who you are talking to, what part of the project you are in, and what has been happening around the project at this time.

When you realize there is something important to communicate, don’t start screaming in every direction and on all channels. Look for the most appropriate person who needs to take action. If you don’t have access to this person, look for someone who can help you pass the information along.

Always focus on the quality of the information and not on the quantity.

When you feel you need to write a large report, please do, but put the important stuff at the beginning and make it clear when you start talking or writing about less important stuff.

If there is nothing to say, keep quiet. If there is something to say, say it sharply.

Whenever you want to propose something, go to the person who will be interested in hearing your proposal.

Bottom line

Many starting testers and some starting team leads feel they need to prove their value by generating tons of data and large reports. This may be true in some organizations, but it is the opposite in most (and definitely in those I like to work in).

Don’t try to look professional with tons of Noise; be the one who provides the Intelligent Signal and helps sail the ship through this fog of noise and uncertainty.

For more on communication skills and best practices for testers, I also recommend reading:

The post Are you the Signal or the Noise? appeared first on QA Intelligence.

Categories: Companies
