
Feed aggregator

Floating Point Quality: Less Floaty, More Pointed

James Bach's Blog - Tue, 06/20/2017 - 20:14

Years ago I sat next to the Numerics Test Team at Apple Computer. I teased them one day about how they had it easy: no user interface to worry about; a stateless world; perfectly predictable outcomes. The test lead just heaved a sigh and launched into a rant about how numerics testing is actually rather complicated and brimming with unexpected ambiguities. Apparently, there are many ways to interpret the IEEE floating point standard and learned people are not in agreement about how to do it. Implementing floating point arithmetic on a digital platform is a matter of tradeoffs between accuracy and performance. And don’t get them started about HP… apparently HP calculators had certain calculation bugs that the scientific community had grown used to. So the Apple guys had to duplicate the bugs in order to be considered “correct.”

Among the reasons why floating point is a problem for digital systems is that digital arithmetic is discrete and finite, whereas real numbers often are not. As my colleague Alan Jorgensen says “This problem arises because computers do not represent some real numbers accurately. Just as we need a special notation to record one divided by three as a decimal fraction: 0.33333…., computers do not accurately represent one divided by ten. This has caused serious financial problems and, in at least one documented instance, death.”

Anyway, Alan just patented a process that addresses this problem “by computing two limits (bounds) containing the represented real number that are carried through successive calculations.  When the result is no longer sufficiently accurate the result is so marked, as are further calculations using that value.  It is fail-safe and performs in real time.  It can operate in conjunction with existing hardware and software.  Conversion between existing standardized floating point and this new bounded floating point format are simple operations.”
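
To make the idea of carried bounds concrete, here is a toy interval-arithmetic sketch in C#. It only gestures at the concept: a faithful implementation would use directed rounding on every operation, and this is in no way the patented bounded floating point format itself, which also covers error marking and performance.

using System;

// A toy model of "bounded" arithmetic: each value carries a lower and an
// upper bound, and every operation widens the bounds. A real implementation
// would round Lo down and Hi up on each operation; this sketch skips that.
struct Bounded
{
    public readonly double Lo, Hi;
    public Bounded(double lo, double hi) { Lo = lo; Hi = hi; }

    public static Bounded operator +(Bounded a, Bounded b)
        => new Bounded(a.Lo + b.Lo, a.Hi + b.Hi);

    // Once the interval grows too wide, results derived from it are suspect.
    public bool IsAccurateTo(double tolerance) => (Hi - Lo) <= tolerance;
}

class BoundedDemo
{
    static void Main()
    {
        // 0.1 has no exact binary representation, so bracket it between
        // two nearby representable values.
        var tenth = new Bounded(0.1 - 1e-17, 0.1 + 1e-17);

        var sum = new Bounded(0, 0);
        for (int i = 0; i < 1_000_000; i++) sum += tenth; // ideally 100000

        Console.WriteLine($"[{sum.Lo:R}, {sum.Hi:R}]");
        Console.WriteLine($"Accurate to 1e-9? {sum.IsAccurateTo(1e-9)}");
    }
}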

If you are working with systems that must do extremely accurate and safe floating point calculations, you might want to check out the patent.

Categories: Blogs

Code Health: Too Many Comments on Your Code Reviews?

Google Testing Blog - Tue, 06/20/2017 - 19:20
This is another post in our Code Health series. A version of this post originally appeared in Google bathrooms worldwide as a Google Testing on the Toilet episode. You can download a printer-friendly version to display in your office.

By Tom O'Neill


Code reviews can slow down an individual code change, but they’re also an opportunity to improve your code and learn from another intelligent, experienced engineer. How can you get the most out of them?

Aim to get most of your changes approved in the first round of review, with only minor comments. If your code reviews frequently require multiple rounds of comments, these tips can save you time.

Spend your reviewers’ time wisely—it’s a limited resource. If they’re catching issues that you could easily have caught yourself, you’re lowering the overall productivity of your team.

Before you send out the code review:
  • Re-evaluate your code: Don’t just send the review out as soon as the tests pass. Step back and try to rethink the whole thing—can the design be cleaned up? Especially if it’s late in the day, see if a better approach occurs to you the next morning. Although this step might slow down an individual code change, it will result in greater average throughput over the long term.
  • Consider an informal design discussion: If there’s something you’re not sure about, pair program, talk face-to-face, or send an early diff and ask for a “pre-review” of the overall design.
  • Self-review the change: Try to look at the code as critically as possible from the standpoint of someone who doesn’t know anything about it. Your code review tool can give you a radically different view of your code than the IDE. This can easily save you a round trip.
  • Make the diff easy to understand: Multiple changes at once make the code harder to review. When you self-review, look for simple changes that reduce the size of the diff. For example, save significant refactoring or formatting changes for another code review.
  • Don’t hide important info in the submit message: Put it in the code as well. Someone reading the code later is unlikely to look at the submit message.
When you’re addressing code review comments:
  • Re-evaluate your code after addressing non-trivial comments: Take a step back and really look at the code with fresh eyes. Once you’ve made one set of changes, you can often find additional improvements that are enabled or suggested by those changes. Just as with any refactoring, it may take several steps to reach the best design.
  • Understand why the reviewer made each comment: If you don’t understand the reasoning behind a comment, don’t just make the change—seek out the reviewer and learn something new.
  • Answer the reviewer’s questions in the code: Don’t just reply—make the code easier to understand (e.g., improve a variable name, change a boolean to an enum) or add a comment. Someone else is going to have the same question later on.
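
As a hypothetical illustration of the boolean-to-enum tip (the names below are invented, not from the original post), compare how the two call sites read:

using System;

// A bare boolean forces readers to guess its meaning at the call site,
// while an enum makes the call self-documenting.
enum WelcomeEmail { Send, Suppress }

class UserService
{
    // Before: CreateUser("alice", true); ...true what?
    public void CreateUser(string name, bool sendWelcomeEmail)
        => CreateUser(name, sendWelcomeEmail ? WelcomeEmail.Send : WelcomeEmail.Suppress);

    // After: CreateUser("alice", WelcomeEmail.Send); reads on its own.
    public void CreateUser(string name, WelcomeEmail welcomeEmail)
    {
        if (welcomeEmail == WelcomeEmail.Send)
            Console.WriteLine($"Welcome email queued for {name}");
    }
}

class Demo
{
    static void Main() => new UserService().CreateUser("alice", WelcomeEmail.Send);
}
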
Categories: Blogs

Live from Velocity San Jose 2017

Velocity has transformed over the last couple of years, just as organizations have transformed the way they build, scale, and maintain the software that powers their business. I remember the early days of Velocity, when it was all about Web Performance Optimization; then it moved on to Web Scale, DevOps, and Building Resilient Systems, and now we have arrived at this year’s theme: building and maintaining complex distributed systems!

Last year I was fortunate enough to get Steve Souders on our PurePerformance Podcast. Steve started Velocity and has been a major contributor to the Web Performance and DevOps community. Listen in to hear what motivated him to go on this journey: “PurePerformance Cafe: Velocity 2016 with Steve Souders” on Spreaker.

Here are my highlights from both days, as I was doing some “live blogging”:

Thursday, June 22 – LIVE Update from Day 2 @ Velocity 2017

This morning I got in line for Speed Networking. As explained yesterday, it was a new concept I hadn’t seen before. But it’s REALLY COOL! I got to meet several new people in a short time frame whom I probably wouldn’t have met otherwise. I encourage every event organizer to consider this concept!

Keynotes & Sessions Day 2

Dave Andrews (Verizon | @daveisangry)

Giving us a glimpse into how Verizon is building against “Cascading failure at scale(s)”. Besides load testing, monitoring, and traffic routing, it is about containing potential problems. He shared some interesting insights into how they contain traffic issues at the local and regional level. There is more to learn from him on Twitter.

Dharma Shukla (Microsoft | @dharmashukla)

Giving us insights into Cosmos DB. Want to learn more? Check it out online!

Cliff Crocker (SOASTA | @cliffcrocker)

Talking about “The False Dichotomy of Finders vs Fixers”. We all have a lot of tools to find problems and highlight them, but we are not really good at actually fixing things. This is a great business model for consulting companies that are “finders”, but it won’t help you in the end!

Reminding us that a lot has changed in the tool and technology space, allowing us to build tools that not only find problems but also provide better insights to fix them! There are also great new web technologies that provide better end-user performance, e.g. Preconnect resource hints or Server Push.

Cliff Crocker reminding us about new tech developments

Dianne Marsh (Netflix | @dmarsh)

Talking about careers and how we have to look back in order to move forward! Giving us insights into her career path and the lessons she learned as an individual contributor and as a manager. Reminded us about “Repeating Trends”.

Repeating Trends: not everything we think is new is really new.
Categories: Companies

Example Mapping for Behavior Driven Development

Software Testing Magazine - Tue, 06/20/2017 - 18:32
When you use a Behavior Driven Development (BDD) approach, you are going to automate tests for Agile user stories based on acceptance criteria. Defining these acceptance criteria between the product...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Categories: Communities

Microsoft Visual Studio integration with Nexus Lifecycle

Sonatype Blog - Tue, 06/20/2017 - 16:09
We are excited to announce the availability of the Nexus IQ Server plugin for Microsoft Visual Studio users. Developers who use Visual Studio now have access to the precise component intelligence available in Nexus Lifecycle. They can easily identify which components meet corporate guidelines and...

To read more, visit our blog at www.sonatype.org/nexus.
Categories: Companies

Meet a Tester: Helbe

Testlio - Community of testers - Tue, 06/20/2017 - 09:00
From left: Marko, Helbe, Kiryl, Kristel, and Duncan.

In Meet a Tester, we feature QA experts from our community who share their love for quality and Testlio. Our first interview is with Helbe from Buckeye, Arizona, USA.

1. How did you end up working as a tester for Testlio?

I first heard about Testlio in an article published in Estonian World News. I was very interested in becoming a tester for several reasons: 1) my parents were both Estonian, so I felt an instant connection to Testlio; 2) as a retired IT professional (whose last role had a strong quality assurance component), I was intrigued by Testlio’s business model, in particular the remote, worldwide testing community. I found contact information on their website and inquired about what it took to become a tester. The rest is history.

Categories: Companies

OneAgent & Security Gateway release notes for version 121

OneAgent General improvements and fixes
  • Early Access Program for Linux PowerPC (Little-endian) hosts running on Red Hat and CentOS. The EAP includes deep monitoring of Java and Node.js, as well as system, network, and plugin metrics and log analytics.
  • Garden injection for new garden-runc 1.2.0 release
  • Reporting of CPU usage for Windows protected processes
  • Changes to Erlang grouping. Programs with undiscovered modules will be grouped as ‘Erlang’
  • Plugins – Technology names in plugin.json are now case-insensitive
  • Enhanced remote diagnostics for Docker
Security Gateway
  • Security Gateway is now required for monitoring of large AWS accounts that include 700+ AWS service instances
  • Persisted custom config & trusted.jks for Windows
  • Support for VMware Cloud on AWS

The post OneAgent & Security Gateway release notes for version 121 appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

The Weekly Wrap – Cloud Foundry, Market Share, Perform 2017, Interop and VictorOps

It was a huge week in digital performance last week. In what I hope is the first of a weekly series (depending on your feedback), here’s the lowdown.

Dynatrace is the first monitoring solution to provide full stack insight into Cloud Foundry

We are thrilled to announce that Dynatrace is the first monitoring solution to provide full stack insights into Cloud Foundry clusters — automatically and with no configuration. This includes monitoring of both Cloud Foundry cluster health for platform and resource optimization, and automatic monitoring of your deployed applications.

For more information, see the full announcement further down in this feed.

Dynatrace Ranks No. 1 in latest Gartner Market Share Analysis Report: Performance Analysis Software, Worldwide 2016

Gartner Market Share APM 2016

For the fifth consecutive year, Dynatrace has been ranked by Gartner, Inc., a leading IT research and advisory firm, as the number one global Application Performance Monitoring (APM) solution provider.

#Perform2017 kicks off in Europe with more than 850 attendees across 4 cities

This week saw the Perform roadshow hit London, Madrid, Rome, and Milan, with customer presentations from Travis Perkins, COOP, Virgin Money, and AWS.

Here are some select tweets from each event to share:
  • London

#perform2017 with #TravisPerkins #COOP #VirginMoney #AWS Standing room only! Superb stories featuring AI, Full Stack, Automation. pic.twitter.com/AbejqkpMUo

— Dynatrace (@Dynatrace) June 15, 2017

  • Madrid

Hoy toca hipodromo, en el #Perform2017 de @dynatraceEspana y la sala está a rebosar. pic.twitter.com/rvEkTYZr7I

— Eugenio Sanz (@Eugeniobdi) June 1, 2017

  • Rome

Iconic location for #dynatrace #perform2017 in Rome, great work and thanks to all customers for joining us pic.twitter.com/yv0G7JfLf0

— Pieter Van Heck (@PieterVHeck) June 13, 2017

  • Milan

#Perform2017 in Milan. Thank you @dynatraceItalia pic.twitter.com/IPb8QWVEYl

— Moviri (@moviri) June 6, 2017

Japan embraces Dynatrace with more than 1000 demos delivered at Interop

It’s phenomenal to see the images and stories coming from Japan, where this week our great partner LAC took the Dynatrace full stack story to Interop. A country that prides itself on its technology innovation, and a company that leads the market in AI, full stack, and automation, meant a record number of demos for our booth staff. Rumour has it over 1000 demos were delivered!

Verizon, AWS and Dynatrace accelerate time to market

In a joint case study between AWS and Dynatrace, Verizon shares how it implemented a comprehensive methodology for cloud migration.

VictorOps: Microservices Monitoring and Critical Incident Management

Hear how our partner VictorOps and Dynatrace work together to bring greater intelligence to microservices monitoring and critical incident management. http://bit.ly/2roIOuD

Is that it?

What a massive week. I didn’t even get a chance to mention DevOps London, AWS Public Sector Summit and Cloud Foundry Summit. But all good things have to come to an end.

Thank you to all our customers, booth staff, event organisers, and Dynatrace partners for a massive week. What do you think? Do you like the summary?

The post The Weekly Wrap – Cloud Foundry, Market Share, Perform 2017, Interop and VictorOps appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

Walmart Integrates Nexus, OneOps, Jenkins, Kubernetes into Distribution Center Management System

Sonatype Blog - Fri, 06/16/2017 - 20:39
Walmart Logistics is integrating Nexus, Jenkins, Kubernetes, and OneOps open source software components into its management system for more than 200 of its distribution centers, in an effort to set up each center as its own cloud. The goal is for each application to function autonomously, just like...

To read more, visit our blog at www.sonatype.org/nexus.
Categories: Companies

Automated Enforcement: The Not So Subtle Difference Between Sonatype Nexus and Everyone Else

Sonatype Blog - Fri, 06/16/2017 - 02:23
We live in an application economy. Software has become the strategic weapon of choice for competing and winning on a global playing field.  This is a world where innovation is king, speed is critical, and open source is center stage.

To read more, visit our blog at www.sonatype.org/nexus.
Categories: Companies

Optimize your dashboard with new filters and custom charts

With the latest release of Dynatrace, we’ve introduced a new way to configure custom charts that makes dashboard creation easier and more intuitive. We’ve also introduced a valuable new type of dashboard tile called a “blinking light” tile.

Create custom charts

Custom charts enable you to analyze any combination of monitoring metrics directly on your dashboard.

To create a custom chart
  1. Select Create custom chart from the navigation menu.
    Alternatively, you can select the Build custom chart tile in the Tile catalog.
  2. On the Build a custom chart page, select metrics related to the services, applications, processes, or hosts you want to monitor.
    For this example, we’ll select the metric Applications – Actions per session.
  3. Click the Build chart button.
  4. Give your chart an intuitive name.
  5. Adjust the aggregation and display options for the metric you’ve selected. To do this, click the metric name, as shown below.
  6. Once you’ve configured the metric, you can optionally add additional metrics by clicking the Add metric button.
  7. Once you’re satisfied with your new chart, click the Pin to dashboard button to add the chart to your dashboard.
New filters and metrics

Filters make it easy to configure unique combinations of metric data for display on your custom dashboard charts. In the latest release of Dynatrace, we’ve enhanced the configuration of metrics and filters for custom charting. The following new metrics and filters can now be added to your custom charts:

  • Additional process filters
  • VMware metrics and filters
  • ESXi metrics and filters

The new metrics can be accessed via the metric drop-down list on the Build a custom chart page (see Step 3 above) or by clicking the Add metric button on the Custom chart page.


Note: You can still create workflow-related charts that focus on relevant subsets of the host-, service-, and database metrics in your environment. You can even combine custom metrics to create new charts that directly support your teams’ unique requirements. For full details, see Can I use filtering to create more sophisticated dashboard charts?

Blinking lights tiles

We’ve introduced a new type of dashboard tile (see examples below). Blinking light tiles enable you to see at a glance how many entities are affected by open problems. Blinking light dashboard tiles focus on a single entity type (e.g., hosts, applications, services, etc). Each green hexagon on a blinking light chart represents a healthy entity (i.e., an entity that is not associated with an open problem). Red hexagons represent entities that are affected by an open problem.

To add a blinking light tile to your dashboard
  1. From your home dashboard, click the Edit button to enter dashboard edit mode. Click the Add (+) button above the dashboard section within which you want the new tile to appear.
     
  2. Select the blinking light tile that’s dedicated to the entity type you want to monitor. Blinking light tiles are currently available for the following entity types: Hosts, Applications, Services, Data centers, Databases, and Web checks. Two different tile sizes are available. In the example below, the small Hosts tile is selected in the Infrastructure section of the tile catalog.
  3. Once pinned to your dashboard, you can click any blinking light tile to visit the corresponding entity list page and begin your analysis of any detected problems.
    You can toggle the size of blinking light tiles by clicking the Toggle size switch available within each tile’s context menu. To retain a blinking light tile on your dashboard and disable the visualization, set the Chart switch to the Off position.

The post Optimize your dashboard with new filters and custom charts appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

Become an ISTQB Advanced Level Certified Tester - Live Virtual Class Forming for July 31 - Aug 3, 2017

I hope you can take advantage of a unique opportunity to attend live virtual training for the ISTQB Advanced Security Tester certification this Summer.

The course runs from 9 a.m. to 5:30 p.m. EDT, July 31 - August 3, 2017.

I will be the instructor of the course. As chair of the ISTQB Advanced Security Tester Working Group, I can bring a unique perspective to the training and prepare you to take the exam.

Here's what you need to know:

1. This is a live virtual class that you can take from your desk or home. You will be able to interact with me, ask questions, make comments, etc.

2. This will be an intensive course with over 20 exercises. I will present some material, then we will have exercise time. At the completion of each exercise, I give my perspective about the solutions.

3. We will go over every question in the ASTQB Sample Exam after each major section in the syllabus. There are nine sections in the syllabus.

4. If you can't make all the sessions, I am also including the e-learning version at no extra cost so you can make up any sessions as needed.

5. The exam is not included in the price of the course. However, the exam can be added for $200. You can use the exam voucher at any Kryterion exam center. Please note that while anyone may take the course and gain a lot from it, in order for you to take the exam, you must first hold the ISTQB Foundation Certification (CTFL) and have 3 or more years of relevant experience in software testing or a related field.

6.  The course also includes a printed workbook. Please allow 5 - 7 days for printing and shipping the book to you. If you live outside of the USA, allow 14 days to receive the book.

7. After July 15, the registration price increases by $200. So, it's best to register soon.

8. Before registering for the class, please review the course outline and ISTQB Advanced Security Tester syllabus so you will be aware of the topics we will cover. While we do cover penetration testing, this is not a class on penetration testing. This certification and course cover many aspects of cybersecurity and the testing of security defenses.

9.  You will leave the class with an increased knowledge of how to help protect your organization by testing your security defenses to ensure they are working effectively.

10.  This course is fully accredited by the ASTQB.

11.  You can register at https://www.mysoftwaretesting.com/ISTQB_Adv_Security_Tester_Certification_Course_p/istqbseclv.htm

If you have any other questions, please feel free to contact me by phone (405-691-8075) or through the contact form at http://www.riceconsulting.com/home/index.php/component/com_formmaker/Itemid,453/id,1/view,formmaker/.

I hope to see you in the course!

Randy


Categories: Blogs

SonarCFamily Now Supports ARM Compilers

Sonar - Thu, 06/15/2017 - 16:08

For those not familiar with ARM (Advanced RISC Machine), let’s start by sharing some numbers: in 2011, the 32-bit ARM architecture was the most widely used architecture in mobile devices and the most popular 32-bit architecture in embedded systems. Moreover, in 2013, 10 billion ARM chips were produced, and “ARM-based chips are found in nearly 60 percent of the world’s mobile devices”.

Why is ARM so popular for embedded systems? Because a RISC architecture typically requires fewer transistors than a complex instruction set computing (CISC) architecture (such as the x86 processors found in most personal computers), which reduces cost, power consumption, and heat dissipation. These characteristics are desirable for light, portable, battery-powered devices, including smartphones, laptops, tablet computers, and other embedded systems.

Most developers targeting the ARM architecture develop in C or C++ and use a compiler able to produce binaries for ARM machines. Both GCC and Clang support an ARM mode out of the box. But if you want to generate a binary finely tuned to reduce the runtime footprint, you might want to go with the ARM Compiler 5, ARM Compiler 6, or Linaro compilers.

SonarCFamily code analyzer version 4.8 adds support for all of these compilers; this long-awaited feature finally becomes reality.

Analyzing a C/C++ project targeting the ARM architecture is no different from analyzing any other kind of C/C++ project, but as a reminder, here are the steps to follow:

# on Windows or on Linux, in an ARM DS-5 enabled environment:
make clean
build-wrapper-[win|linux]-x86-64 --out-dir <output directory> make
# set sonar.cfamily.build-wrapper-output=<output directory> in sonar-project.properties
sonar-scanner

or, on Linux, from a console without ARM environment:

/usr/local/DS-5_v5.26.2/bin/suite_exec -t "ARM Compiler 5 (DS-5 built-in)" make clean
build-wrapper-linux-x86-64 --out-dir <output directory> /usr/local/DS-5_v5.26.2/bin/suite_exec -t "ARM Compiler 5 (DS-5 built-in)" make
# set sonar.cfamily.build-wrapper-output=<output directory> on sonar-project.properties
sonar-scanner
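
For reference, a minimal sonar-project.properties for such an analysis might look like this (the project key, name, source directory, and output directory below are illustrative, not prescribed):

# illustrative sonar-project.properties for the ARM analysis above
sonar.projectKey=my-arm-project
sonar.projectName=My ARM Project
sonar.projectVersion=1.0
sonar.sources=src
# must match the --out-dir passed to the build wrapper
sonar.cfamily.build-wrapper-output=bw-output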

Once you have analyzed the ARM-compiled source code, you get the full power of the analysis: hundreds of rules to track the nastiest issues, data-flow analysis included!

Of course, SonarCFamily 4.8 is compatible with SonarLint, which means that ARM DS-5 developers using Eclipse, or any Eclipse CDT developer, will be able to use SonarLint and get their code analyzed on the fly. This shortens the development feedback loop and catches issues “before they exist”!
SonarLint in action on Eclipse ARM DS-5

Categories: Open Source

Dynatrace is first monitoring solution to provide full-stack insight into Cloud Foundry

Dynatrace support for Cloud Foundry applications has been available for some time now, helping application teams better understand and optimize their distributed microservices environments. As we work tirelessly to provide you with full insights into your technology stack, I’m happy to announce that Dynatrace is the first monitoring solution to provide full-stack insights into Cloud Foundry clusters — automatically and with no configuration. This includes monitoring of both Cloud Foundry cluster health for platform and resource optimization, and automatic monitoring of your deployed applications.

Cloud Foundry cluster health monitoring

By deploying Dynatrace OneAgent to your Cloud Foundry VMs, you gain monitoring insights into all Cloud Foundry components, including Diego Cells, Cloud Controller, Gorouter, and more. With these capabilities, Dynatrace enables you to optimize your cluster component sizing, detect failing or under-provisioned components, and leverage AI-powered analytics throughout your entire stack.

Deploying OneAgent to your cluster components gives you health metrics for each VM, including CPU usage, disk I/O, and network I/O. It even provides insight into the quality of network communication between processes across your Cloud Foundry components.

Automatic monitoring of Cloud Foundry applications, down to the code and query level

Dynatrace full-stack monitoring for Cloud Foundry environments includes built-in auto-injection for Garden-runC containers. This means that Dynatrace OneAgent auto-detects each application that’s deployed to Cloud Foundry and automatically initiates deep application monitoring.

Not only does Dynatrace OneAgent provide metrics for the applications running in Garden containers, it also provides code-level visibility into your distributed application instances.

Deep monitoring provides your microservices teams with the insights required to optimize the performance of services while ensuring complete availability and functionality.

Automatic distributed service tracing

In microservices environments — especially those deployed to Cloud Foundry — automatic distributed service-tracing is a powerful means of continuously and seamlessly tracking the health of the entire microservices architecture.

Service tracing enables tracking of how requests to microservices and Cloud Foundry apps are propagated through a system. Service tracing also helps to identify performance bottlenecks and failed requests in the service-to-service communication chain. It’s never been easier to pinpoint the root cause of poor performance in heterogeneous microservices stacks. Since Dynatrace OneAgent automatically monitors all Cloud Foundry applications on Diego cells, these automated tracing capabilities are automatically applied to your Cloud Foundry applications.

Integrate with your existing BOSH deployments

Dynatrace full-stack monitoring for Cloud Foundry integrates seamlessly with BOSH deployments. Dynatrace provides a BOSH release that you can use as an add-on to deploy OneAgent to your cluster VMs, including Diego Cells and others. The BOSH release also covers deployment of OneAgent to Windows Diego Cells, thereby enabling automatic monitoring of .NET Framework based applications.

For full details on the Dynatrace BOSH add-on, please see How do I deploy OneAgent for full-stack Cloud Foundry monitoring?

We’ve worked with Pivotal to make the Dynatrace Full-Stack Add-on for Pivotal Cloud Foundry available on Pivotal Cloud Foundry. So, if you’re using Pivotal Cloud Foundry, go ahead and download the Dynatrace Full-Stack add-on from the Pivotal Network.

The post Dynatrace is first monitoring solution to provide full-stack insight into Cloud Foundry appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

13 Reasons Why Manual Testing Can Never Be Replaced

Testlio - Community of testers - Wed, 06/14/2017 - 19:00

Some development teams jump into automated testing like it’s the holy grail. And it is—kind of. Automated testing is a great safety net for regression testing and for checking in on redundant components.

But we’re strong believers in manual, exploratory testing. Even as automated suites become more sophisticated, they still require human drivers. Actually, automated tests are often converted from initially manual efforts.

Here’s why developers need manual testers, whether outsourced or in-house.


1. There’s a whole bunch of testing that simply must be manual

User experience is probably the biggest reason why manual testing is important. We all could use valuable criticism from time to time (even developers!). When it comes not just to functionality but also to first impressions, there’s no replacement for the human eye.

While smoke tests can be automated, they too are better left for manual testing. It’s far quicker for a tester to poke around your app and see if it’s ready for hardcore testing than for a tester to write scripts that would do the same. And early-phase scripts won’t last, anyway.

Plus, only a human can double check language use and other key localization factors in a product targeting multiple regions.

2. Automated testing empowers human testers

Like cars that brake for you in an emergency, automated testing is busy while you’re looking away.

Automated software testing saves time with repetitive jobs, so that manual testing efforts can center around coming up with creative use cases.

The most successful use of automated testing isn’t about trying to get it to behave like a human, but about enhancing overall product coverage by creating new, unique scripts.

 


3. Bugs are found where you least expect them

Even when testing for specific use cases, testers can still find bugs that they weren’t necessarily looking for.

That’s a big deal. For some projects, the majority of bugs are actually found by testers who were looking for something else entirely. Automated testing can’t notice errors it wasn’t programmed to find.

4. Humans are creative and analytical

While we all like to bemoan the downfalls of being human (why can’t we fly?!), we do have our good qualities.

The skills and experience that testers bring to the table help them strategize every time they start a new session. At this point in time, there’s no replacement for our quick mental processing speeds and our dope analysis!

5. Testing scripts have to be rewritten in agile

Working with constant feedback in an agile environment means fluid changes to the product flow, the UI, or even features. And nearly every time, a change entails a rewrite of your automated scripts for the next sprint.

New changes also affect the scripts for regression testing, so even that classic automation example requires a lot of updating in agile. That amount of work warrants consideration when a development team is trying to figure out where to invest resources.

6. Automation is too expensive for small projects

Not only do you have automation software to pay for, but you also have higher associated maintenance and management costs, due to script writing and rewriting, as well as set up and processing times.

For long term projects and big products, the higher costs can be worth it. But for shorter, smaller projects it’s a monumental waste of both time and money.

When calculating the potential ROI for an automation purchase, you have to factor in added man hours, as well.

7. Unless tightly managed, automation has a tendency to lag behind sprints

There’s a difference between what we hope technology can do for us, and the reality of what we can do with it.

With the constant script-updating, it is very hard to keep automated testing on track with sprints. It’s worthless to test fixes that are no longer current. Successful automation starts early on and never falls more than one sprint behind.

If a development team doesn’t have the resources to make that happen, it might be better not to try (unless that team is making a long term investment with plans to improve the process).

8. Manual testers learn more about the user perspective

Humans learn all the livelong day. You wouldn’t want to waste that knowledge, would you?

Because human testers often act like users, they provide a lot more value than just knowledge of how the product is currently performing. Testers can also help steer products in new directions with the issues and suggestions they deliver.


9. Automation can’t catch issues that humans are unaware of

This goes back to point #3, that bugs are often found where we aren’t looking. But beyond that, there are also whole use cases and large risks that we may not be immediately aware of.

This natural ignorance can be mitigated with exploratory testing, which can in turn feed the development of new scripts.

No matter the forms of testing a team relies on, up front strategizing is always necessary. But we can never expect to come up with everything on the first go around. For most of what was missed, manual testing is a much faster catch-all.

10. Good testing is repeatable but also variable

The most successful testing has a mix of two factors: repetition and variation. Automated testing is great for the continual checking process, but it’s just not enough. You also want variation, and some wild card use cases.

Combined, these two factors give the highest chance of achieving full product coverage.

11. Mobile devices have complicated use cases

Device compatibility and interactions can’t be covered with automated scripts. Things like leaving and reentering wi-fi, simultaneously running other apps, device permissions, and receiving calls and texts can all potentially wreak havoc on the performance of an app.

Changing swipe directions and the number of fingers used for tapping can also affect mobile apps. Clearly you need a manual tester to get a little touchy-feely if you want your app to have the minimum number of freak outs. 

12. Manual testing goes beyond pass/fail

Pass/fail testing is super cool. We ask our Testlions to conduct use cases with set outcomes all the time. But for most projects, more complicated (and yes variable) scenarios are desirable.

Web forms are a prime example of this. While an automated script could easily input values into a web page, it can’t double check that the values will be saved if the user navigates away and then comes back.

And what about the speed of submission? A human will definitely notice if a web form submits abnormally slowly while other websites are loading at top speed.

But speed isn’t something that fits into pass/fail.

13. Manual testers can quickly reproduce customer-caught errors

While you hope you catch all bugs before deploying, you also hope that your customers will kindly let you know of any errors.

Hot fixes are a must for cloud-based products. A manual tester can use the information submitted by the customer to submit a bug report that will be helpful to the engineer.

The time between a customer issue and a fix is way faster with manual testing. Like way.

Yup, automation is awesome. But manual testing is above all a service — one that can’t be automated.

Categories: Companies

Win a Free Pass to Jenkins World 2017!

How would YOU like to attend Jenkins World 2017 for free*?

Jenkins World 2017 will be THE event of the year for all things DevOps, continuous delivery and Jenkins. Jenkins World is the largest gathering of Jenkins® users in the world, including Jenkins experts, continuous delivery thought leaders and companies offering complementary technologies for Jenkins. With keynotes from Kohsuke Kawaguchi, Sacha Labourey and Jez Humble - with 60+ sessions - with all of the learning and networking about the technologies you care about - this is not an event to be missed.

Four (4) lucky winners from all over the world will get the chance to attend Jenkins World 2017 in San Francisco, CA! Simply enter your information for one entry, and then unlock additional ways to enter the contest. The more entries you have, the more chances you have to win! There will be one (1) winner from each of the following regions:

  • North America
  • South America
  • Europe, Middle East, Africa
  • Asia Pacific

This contest closes on JUNE 30, 2017 so enter now!

Learn, network, explore and shape the future of Jenkins - for free! Good luck - We will see YOU at Jenkins World.

Win a Free Pass to Jenkins World 2017!

*Pass does not include add-on training/workshops or travel expenses.
Categories: Companies

Where has the Test Manager Gone?

Gurock Software Blog - Wed, 06/14/2017 - 16:56


This is a guest posting by Justin Rohrman. Justin has been a professional software tester in various capacities since 2005. In his current role, Justin is a consulting software tester and writer working with Excelon Development. Outside of work, he is currently serving on the Association For Software Testing Board of Directors as President helping to facilitate and develop various projects.

From 2006 to 2010 I worked for a company that consisted of distinct testing and development groups. Each group had its own hierarchy, including staff contributors, leads, a manager, and maybe an architect. Our Test Manager was the Battle Master general. Every new sprint started with contention. The development groups would have meetings to prepare for the sprint. The testing group I worked in would hear about these meetings a day or two later. When we found out, our Test Manager would jump into action. He spent hours each sprint listening to the development managers claim that having testers in pre-sprint meetings would slow progress down. He would spend more time trying to explain why having the test group involved in the meetings would save time over the course of the release.

During feature development, our Software Test Manager took on a different role: he became the principal bug advocate. Developers saw new bugs as a distraction from finishing new features. Bugs would accumulate like a trash heap, and then at the end of each day a few Development Managers, the Product Owner, and our Test Manager went to a triage meeting. That meeting was a negotiation. Every software project worked under a limited time scale. One way to work within the time limit is to re-categorize bugs as feature requests and trivial things that don’t need to be fixed. Another way to deal with the problem is to say the bugs couldn’t be reproduced. Our Test Manager was there to advocate for the customer and convince other managers that the problems we reported were problems that should be fixed.

Towards the end of a release cycle, our Test Manager became an umbrella, shielding us from a storm of pressure. He was required to produce daily metrics on bugs reported, regression test velocity, and projections of when we would be done. As a testing group, we were often performing pre-release testing and driving toward releases while features were still being finished. This large amount of shuffle and change created failure demand: for example, a feature that works today might not work tomorrow. The question of “why is testing taking so long?” never stopped. The question that should have been asked is “why is this code so hard to test?”

It took our Test Manager a lot of time to defend our team. He spent his remaining time encouraging skill development, facilitating a smooth work flow and completing typical administrative managerial duties like hiring and performance reviews.

A lot of the Test Manager’s value was in shielding our team from organizational dysfunction. Our manager had to work as an intermediary because of divisiveness between teams. He had to fight for bug fixes because no one could agree upon what quality and completion meant.

Why Test Managers are Disappearing


The teams I have worked with recently do not have a Test Manager.

I am working on a full time UI automation project right now. My days begin by inspecting the automated tests that failed in the overnight run. I start this process by rerunning the test so I can observe what is happening in the browser. Once this is completed, I abandon the automation tool and perform follow up testing. I will examine what happens if a value is different, what happens if we use Chrome instead of Internet Explorer, and what happens if we approach the feature from a different place in the software. When I discover something interesting I start a conversation with the developer. That problem might turn out to be an important bug. It might be a component that is still under development, or it might be something that the customer doesn’t care about. If it is a bug, then the developer fixes the problem, or puts the problem in his work queue.

Rather than a death march, we coast into pre-release testing. The test suites discover most of the bugs created during feature development, which means they can be fixed relatively quickly. Before a release, the project lead merges all of the new code changes from the development branch into customer configuration branches. Over the next few days we watch the test results carefully for surprises. Usually we will find one or two problems exposed by different product configurations. When we do, I demo those problems to the development team, and they get fixed.

There is no intermediary between me and the development team, no one who acts as a communication barrier. When problems are found, we talk about them. When big product changes are approaching, we talk about them. No one needs to protect the testing group from Development Managers, and no one has to take time away from testing to generate reports.

This team isn’t agile, but we have a lot of the fundamentals down. Rather than having distinct technical teams, we are blended. Programmers work closely with testers and product owners to drive toward production. There are no identifiable hand-offs between skill sets; there is more of a flow. Removing organizational dysfunction removes much of the traditional role of a Test Manager. On this project, ancillary Test Manager roles like facilitation, skill development, and performance reviews are a matter of personal responsibility. If we want to go to a conference or take a training course, we do. There are no performance reviews to be administered.

Agile has developed into a framework that helps people chop dysfunction out of their teams. One team might have a series of hand-offs between roles. For example, product owners manage the requirements and user stories, programmers take those and turn them into software, and testers take new code and discover problems. Blended teams simply communicate with each other. Over the course of a day, a product owner, programmer, and tester might sit down together to work on a feature. As the product owner explains their vision, the programmer turns that into code. As soon as the new code can be run, the tester can ask questions, build test automation, and investigate the software. In the highest functioning teams, new features are ready to ship when the programmer checks new code and triggers the Continuous Integration system.

What is Next


The future I am seeing is one with fewer, if any, Test Managers.

Teams that were once separate and distinct are now small and blended. Walk into the development room at a software company, and you might find small groups of developers and testers sitting together. They are working on the same projects, reporting in the same status meetings, and they work under the same managers. Instead of taking explicit instructions from a manager, these teams are mostly self-directed. They pull their work from the top of an organized queue. When the programmers have implementation questions, they ask the product owner directly.

The organizational hierarchy gets flattened when technical staff can direct their own work. The middleman between the testers and the rest of the team, the Test Manager, is no longer required. The need for a person to keep a pulse on testing status disappears. At this point, there is one thing to keep track of and report on: how close to completion and shipping are we? This can be done by the technical staff.

Some of the larger organizations will have a practice lead or “coach” role. This person works with development teams on test design and teaches skills that would normally live with the Test Manager. However, most people in the Test Manager role now have a choice to make. They can linger in the shrinking number of companies that still require this role, hoping to eke out a few more years, or they can move on to other roles. That might mean moving back to a technical contributor role like programmer or tester. It could also mean moving into a non-technical role like product or account management.

The industry can be slow to embrace new ideas and ways of working in software development. It takes even longer for a critical mass of companies to adopt these ideas. Watching the trends can help us prepare for the future. For those in the role, it is important to consider what your future as a Test Manager will look like. A good question to ask is: “Where do you want to go from here?”

Categories: Companies

LogiGear Releases Free Version of TestArchitect

Software Testing Magazine - Wed, 06/14/2017 - 16:20
LogiGear has announced the newest addition to the TestArchitect family, TestArchitect Team. This new edition of TestArchitect is available online at testarchitect.com and is customized to meet small...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Categories: Communities

Sauce Labs Launches Test Analytics Platform

Software Testing Magazine - Wed, 06/14/2017 - 16:09
Sauce Labs has announced Sauce Labs Test Analytics, the latest addition to the Sauce Labs Automated Testing Cloud. This latest release provides users with near real-time, multi-dimensional test...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Categories: Communities

AutoMapper 6.1.0 released

Jimmy Bogard - Wed, 06/14/2017 - 13:25

See the release notes:

v6.1.0

As with all of our releases, the major 6.0 release broke some APIs, and this dot release adds a number of new features. The big features for 6.1.0 are around reverse-mapping support. First, we detect cycles in mapped classes to automatically preserve references.
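
For instance, given a hypothetical self-referential model like the one below (these types are illustrative, not from the release notes), AutoMapper now notices the parent/child cycle and preserves references automatically instead of recursing forever:

using System.Collections.Generic;
using AutoMapper;

// Each Category points to its children, and each child points back to its
// parent, creating a cycle in the object graph.
public class Category
{
    public string Name { get; set; }
    public Category Parent { get; set; }
    public List<Category> Children { get; set; } = new List<Category>();
}

public class CategoryDto
{
    public string Name { get; set; }
    public CategoryDto Parent { get; set; }
    public List<CategoryDto> Children { get; set; }
}

public class CycleDemo
{
    public static void Main()
    {
        // The Category -> Children -> Parent cycle is detected in this map.
        Mapper.Initialize(cfg => cfg.CreateMap<Category, CategoryDto>());

        var root = new Category { Name = "Root" };
        root.Children.Add(new Category { Name = "Child", Parent = root });

        var dto = Mapper.Map<Category, CategoryDto>(root);
        // dto.Children[0].Parent is the same DTO instance as dto.
    }
}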

Much larger, however, is unflattening. For reverse mapping, we can now unflatten into a richer model:

public class Order {  
  public decimal Total { get; set; }
  public Customer Customer { get; set; } 
}
public class Customer {  
  public string Name { get; set; }
}

We can flatten this into a DTO:

public class OrderDto {  
  public decimal Total { get; set; }
  public string CustomerName { get; set; }
}

We can map both directions, including unflattening:

Mapper.Initialize(cfg => {  
  cfg.CreateMap<Order, OrderDto>()
     .ReverseMap();
});

By calling ReverseMap, AutoMapper creates a reverse mapping configuration that includes unflattening:

var customer = new Customer {  
  Name = "Bob"
};
var order = new Order {  
  Customer = customer,
  Total = 15.8m
};

var orderDto = Mapper.Map<Order, OrderDto>(order);

orderDto.CustomerName = "Joe";

Mapper.Map(orderDto, order);

order.Customer.Name.ShouldEqual("Joe");  

Dogs and cats living together! We now have unflattening.

Enjoy!

Categories: Blogs
