
Feed aggregator

Jenkins World 2017 Community Awards - Last Call for Nominations!

This is a guest post by Alyssa Tong, who runs the Jenkins Area Meetup program and is also responsible for Marketing & Community Programs at CloudBees, Inc. We have received a good number of nominations for the Jenkins World 2017 Community Awards. These nominations are indicative of the excellent work Jenkins members are doing for the betterment of Jenkins. The deadline for nomination is this Friday, June 16. This will be the first year we are recognizing community members who have shown excellence through commitment, creative thinking, and contributions to continue making Jenkins a great open source automation server. The award categories include: Most Valuable Contributor - This award is...
Categories: Open Source

Jenkins World 2017: International Program

Jenkins World is the largest gathering of Jenkins® users in the world, including Jenkins experts, continuous delivery thought leaders and companies offering complementary technologies for Jenkins. To ensure our international crowd doesn’t get left behind, we have put together a dedicated program for all attendees traveling to Jenkins World from outside the United States. The program gives access to everything that a full conference registration includes (keynotes, breakout sessions, expo access and meals) along with these additional events:

  • Tuesday, August 29 (Morning) - Optional: Training and Certification All Day
    • Get access to pre-conference training at Jenkins World 2017 (additional charge) and take advantage of a FREE certification exam (no charge) with your registration for the conference.
  • Tuesday, August 29 (Evening) - Jenkins World Kickoff 7:30pm PDT
    • The Jenkins World kickoff event for International Program attendees at the Autodesk Gallery will be hosted by Kohsuke Kawaguchi. Join us for a night of dinner, drinks, a DJ and networking with other international attendees, Jenkins contributors and CloudBees management. RSVP by August 15 here to secure your spot!
  • Thursday, August 31 - International Program Private Lunch 12:30pm-1:30pm PDT
    • Join us for an International Program private lunch hosted by Kohsuke Kawaguchi. This lunch will be conducted as a Birds of a Feather lunch with CloudBees and Jenkins community leaders from across the world.

The only criterion for this dedicated program is to be from outside of the US.

To register simply visit the International Program website.

We look forward to seeing you there!

Blog Categories: Jenkins
Categories: Companies

International Symposium on Software Testing and Analysis, Santa Barbara, USA, July 10-14 2017

Software Testing Magazine - Tue, 06/13/2017 - 10:00
ISSTA is an international research symposium on software testing and analysis. This conference brings together academics, industrial researchers and practitioners to exchange new ideas, problems, and...

Categories: Communities

ChinaTest, Beijing, China, July 16-19 2017

Software Testing Magazine - Tue, 06/13/2017 - 08:00
ChinaTest is a four-day conference about software testing that will take place in Beijing. It is part of the TiD Conference that integrates resources and forces of SPIChina, ChinaTest and AgileChina....

Categories: Communities

Blue Ocean 1.1 - fast search for pipelines and much more

The Blue Ocean team are proud to announce the release of Blue Ocean 1.1. We’ve shipped a tonne of small improvements, features and bug fixes that will make your day-to-day experience with Blue Ocean even smoother. Today is also the first time we are promoting our Public Roadmap. We recognise that tracking what we are working on at a macro level through JIRA can be a bit of a pain, and the Public Roadmap makes it very easy for anyone to find out what we are working on. We’ve got some really cool stuff coming, so check back here soon! It’s been...
Categories: Open Source

Heartbleed: The Open Source Vulnerability that Keeps on Giving (and Taking)

Sonatype Blog - Mon, 06/12/2017 - 21:49
Disclosed in April 2014, Heartbleed is the vulnerability gift that keeps on giving to some -- and taking away from others.  The latest example of this dynamic surfaced today when ICO, the UK's data regulator, levied a £100,000 fine against the Gloucester City Council for poor hygiene which...

To read more, visit our blog at www.sonatype.org/nexus.
Categories: Companies

Building Quality into Software Development

Software Testing Magazine - Mon, 06/12/2017 - 17:11
Scaling software development teams can sometimes be a problem for fast-growing startups. How can you keep the quality of the code when you start hiring more and more software developers? In his blog...

Categories: Communities

A Test Manager?

Hiccupps - James Thomas - Mon, 06/12/2017 - 09:31

CEWT #4 was about test management and test managers. One of the things that became apparent during the day was how much of a moveable feast the role associated with this title is. And that reflects my own experience.

A few months ago, when discussing courses for the line managers in the Test team, a trainer outlined what his course would cover and asked whether I'd got any heuristics for management. I gave him these, none of which were included in his synopsis:
  • Clear and present. (Say what you think and why, and what you are committed to; encourage and answer any question; be approachable, available and responsive, or say when you can be)
  • It’s all about MOI. (Motivation: explain why we are doing what we’re doing; Organisation: set things up to facilitate work, opportunities; Innovation: be ready with ideas when they’re needed)
  • Congruency in all decisions. (Consider the other person, the context, yourself)

In advance of CEWT, one of my team asked me what I felt my responsibilities as a Test Manager are. Off the top of my head, I suggested they included the following:
  • Provide appropriate testing resource to the business.
  • Assist in the personal development of my staff.
  • Develop relationships in my teams, with my teams, across teams.

At the pub after CEWT last night I was asked what I did as a Test Manager. I replied that it's changed a lot over time, but has encompassed situations where:
  • I was the sole tester. (And also learning how to be a tester.)
  • I was planning and scheduling the testing for a small test team, working on a single product. (And also learning about planning and scheduling for others.)
  • I was planning assignments of testers to projects and teams across products. (And also learning about how to work without knowing so much about some of the work my team are doing.)
  • I was managing larger and larger teams. (And learning how to be a better manager.)
  • I was delegating larger and larger projects to other testers. (And learning how to help others to manage larger projects.)
  • I was keeping track of more and more projects across the company, as we grew. (And learning about finding ways to get the right information at the right costs.)
  • I was delegating line management responsibility to other testers. (And learning about how to help others find and express themselves in line management roles.)

Ask a slightly different question, or a different test manager, or in a different context, or about a different time ...

Get a different answer.
Image: https://flic.kr/p/cg54Sd
Categories: Blogs

Does certification have value or not?

I read a blogpost in Dutch named “Does certification have value or not?” by Jan Jaap Cannegieter. I wanted to reply, but there was no option to reply, so I decided to turn my comments into a blogpost. Since the original blogpost is in Dutch I have translated it here.

“The proponents claim that you prove to have a foundation in testing with certification, you possess certain knowledge and it supports education.” (text in blue is from the blogpost, translated by me).

Three things are said here:

  1. prove to have foundation
    Foundation? What foundation? You learn a few terms/definitions and an over-simplified “standard” process? And how important is this anyway? Also, the argument of a common language is nicely debunked by Michael Bolton here: “Common languages ain’t so common”
  2. possess certain knowledge
    When you pass an exam, you show that you can remember certain things. It doesn’t prove you can apply that knowledge. And is that knowledge really important in our craft? I think knowledge is overvalued and skills are undervalued. I’d rather have someone who has the skills to play football well than somebody who merely knows the rules. From a foundation training, wouldn’t you at least expect to learn the basic testing skills? In no ISTQB training do students use a computer. Imagine giving someone a driver’s license without them ever having sat in a car …
  3. supports education
    Really? Can you tell me how? I think the opposite is true! As an experienced teacher (I also did my share of certification training in the past), my experience is that there is too much focus on passing the exam rather than learning useful skills. Unfortunately, preparing the students for the exam takes a lot of time and focus away from the stuff that really matters. Time I would rather use differently.

Learning & tacit knowledge

So how do people learn skills? There are many resources I could point to. Try these:

In his wonderful book “The psychology of software testing” John Stevenson talks about learning on page 49:

The “sit back and listen” approach can be effective in acquiring information but appears to be very poor in the development of thinking skills or acquiring the necessary knowledge to apply what has been explained. The majority of trainers have come to realise the importance of hands on training “Learn by doing” or “experiential learning”.

John points to resources like: Learningfromexperience.com and the book “Experiential learning: experience as the source of learning and development” by David Kolb. Also Jerry Weinberg has written books on experiential learning.

The resources on learning skills I mentioned earlier will tell you that experienced people know what is relevant and how things are related. Practice, experimentation and reflection are also important parts of learning. Learning a skill depends heavily on tacit knowledge. On page 50 of his book John Stevenson writes:

Päivi Tynjälä makes an interesting comment in the International Journal of Educational Research: “The key to professional development is making explicit that which has earlier been tacit and implicit, and thus opening it to critical reflection and transformation” – This means that what we learn may not be something we can explain easily (tacit) and that as we learn we try to find ways to make it explicit. This is the key to understanding and knowledge: when we take something which is implicit and make it explicit, we are able to reflect on what is learned and explain our understanding.

And since testing is collecting information or learning about a product, the importance of tacit knowledge also applies to testing: John writes in his book on page 197:

However testing is about testing the information we do not know or cannot explain (the hidden stuff). To do this we have to use tacit knowledge (skills, experience, thinking) and we need to experience it to be able to work it out. This is what is meant by tacit knowledge”.

Back to the blogpost:

The opponents say certification only shows that you’ve learned a particular book well, that it says nothing about the tester’s ability, and that it can be counterproductive because the tester is trained to be a standard tester.

  • Learned a particular book
    Agree, see arguments 1 and 2 above.
  • it says nothing about the tester’s ability
    Agree, see my argumentation on skills in point 2 above: “knowledge is overvalued and skills are undervalued”. To learn we need practice and reflection. Also, tacit knowledge is an important part of learning.
  • Trained to a standard tester
    Agree. No testing that I know of is standard. Testing is driven by context. And testers with excellent skills have the ability to work in any context without using standards or templates. Have a look at the TED Talk by Dr. Derek Cabrera “How Thinking Works”. He explains that critical thinking is an extremely important skill. Schools (and training providers) nowadays are over-engineering the content curriculum: students do not learn to think, they learn to memorize stuff. Students are taught to follow instructions, like painting by numbers or filling in templates. To fix this, we need to learn how to think better! Learning to paint by numbers is exactly what certification based on knowledge does to testers! Read more about learning, thinking and how to become an excellent tester in one of my earlier blogposts: “a road to awesomeness”.

Comparison with driving license
Does a driving license show anything? Well, at least you have studied the traffic rules well and know them. And, while driving, it is quite useful if we all use the same rules. If you doubt that, you should drive a couple of rounds in Mumbai.

In testing we should NEVER use the same rules as a starting point: “The value depends on the context!” Driving in Mumbai, or anywhere else, by strictly adhering to the rules will result in accidents and will get you killed. You need skills to drive a car and be able to anticipate, observe, and respond to unexpected behaviour of others. This is what will keep you out of trouble while driving.

As I explained earlier on the TestNet website, this comparison is wrong in many ways. For a driver’s license, you must do a practical exam. And to pass the practical exam, most people take lessons! You will have driven for at least 20 hours before your exam. And the exam is not a laboratory: you go on the (real) road in a real car. A multiple-choice exam does not even remotely resemble a real situation. That’s also how pointless ISTQB or TMap certificates are. Nowhere in the training or the exam does the student use software, nor does the student have to test anything!

This is the heart of the problem! People do not learn how to test, but they learn to memorize outdated theory about testing. Unfortunately in many companies new and inexperienced testers are left unattended in complex environments without the right supervision and support!

So what would you prefer in your project: someone who can drive a car (someone who has the basic skills to test software), or someone who knows the rules (someone who knows all the process steps and definitions by heart)? In addition, ISTQB states here that the training is intended for people with 6 months of experience. So how are new testers supposed to learn during their first 6 months?

The foundation for a tester?

The argument that the ISTQB foundation training provides a basis for a beginner to start is nonsense! It teaches the students a number of terms and a practically unusable standard process. In addition, there is a lot of theory about test techniques and approaches, but the practical implementation is lacking. There are many better alternatives, as described in the resources earlier in this blogpost: learning by doing! Of course with the right guidance, support and supervision. Teach beginners the skills to do their work, just as we learn the skills to drive a car in driving lessons: in a safe environment with an experienced driver next to us, until we are skilled enough to do it without supervision. Sure, theory and explicit knowledge are important, but skills are much more important! And we need tacit knowledge to apply the explicit knowledge in our work.

So please stop stating that foundation training like TMap and ISTQB is a good start for people to learn about testing. It isn’t. Learning to drive a car starts with practicing actually driving the car.

Jan Jaap states he thinks a tester should be certified: “And what about testers? I think that they should also be certified. From someone who calls himself a professional tester we may expect some basic knowledge and knowledge about certain methods?”
I think we may expect professional testers to have expertise in different methods. They should be able to do their job, which demands skills and knowledge. We may expect a bit more from professional testers than only some basic knowledge and knowledge about methods.

“Many of the well-known certification programs originated when IT projects looked very different and, in my view, these programs did not grow with the developments. So they train for the old world”

Absolutely true.

“Another point where the opponents have a point is the value purchasing departments or intermediaries attach to certificates. In many of the purchasing departments and intermediaries, the attitude seems that if someone has a certificate, it is also a good tester. And to say that, more is needed.”

It is indeed very sad that this is the main reason why certificates are popular. Many people get certified because of the popular demand from organisations that do not recognise the true value of these certificates. Organisations are often not able (or do not want to spend the time needed) to recognise real professional testers, and so they rely on certificates. On how to solve this problem, I did a webinar “Tips, Tricks & Lessons Learned for Hiring Professional Testers” and wrote an article about it for Testing Circus.

Learning goals & value

On the ISTQB website I found the Foundation Level learning goals. Let’s have a look at them. Quotes from the website are in purple.

Foundation Level professionals should be able to:

  • Use a common language for efficient and effective communication with other testers and project stakeholders.
    Okay, with an exam we can check if the student knows how ISTQB defines stuff. However, understanding what it means or how to deal with it in daily practice is very different. Also, again, common language is a myth.
  • Understand established testing concepts, the fundamental test process, test approaches, and principles to support test objectives.
    Concepts and test process: okay, you can check if a student remembers these. However, the content is old and outdated and in many places incorrect! I think understanding approaches cannot be checked in a (multiple-choice) exam. Maybe some definitions, but how to apply them? No way.
  • Design and prioritize tests by using established techniques; analyze both functional and non-functional specifications (such as performance and usability) at all test levels for systems with a low to medium level of complexity.
    Design and prioritize tests? Interesting. Where is this trained? Or tested in the exam? Analyse specifications? That is not even part of the training. Applying some techniques is, but there is a lot more to designing and prioritizing tests and analysing specifications.
  • Execute tests according to agreed test plans, and analyze and report on the results of tests.
    Neither execution of tests nor analysis and reporting of test results is part of the exam. In class, only the theory of test reporting is discussed; it is never practiced.
  • Write clear and understandable incident reports.
    How do you check this with a multiple-choice exam? And how do you train this skill without actually testing software in class? There are no exercises in class that actually ask you to write such reports.
  • Effectively participate in reviews of small to medium-sized projects.
    The theory about reviews is part of the class. To effectively participate in reviews, you need to do it and learn from experience.
  • Be familiar with different types of testing tools and their uses; assist in the selection and implementation process.
    Some tools and their goals and uses are mentioned in class. So I will agree with the first part. But to assist in selection and implementation, again you need skills.

So looking at the learning goals above, I doubt that the current classes teach this. The exam certainly doesn’t prove that a foundation level professional is able to do these things. A lot of the promises are just wrong! Certification training like ISTQB-F and TMap, as it is today, is simply not worth the money! The training and exam typically take 3 days and cost around 1,700 euro in the Netherlands. I think that is a crazy investment for what you get in return… There are better ways to invest that money, time and effort!

I think that a more valuable 3-day foundation training is doable. But surely not the way it is done now by TMap or ISTQB. I wrote a blog post about it years ago: “What they teach us in TMap Class and why it is wrong!”.

More blogs / presentations about certification:

Categories: Blogs

A Thousand Points of Light: Critical Performance Insights from Wire Data

Modernizing and optimizing. Transforming. If you’re in IT, you hear these terms frequently. They likely mean different things to different organizations, but there are a couple of recurring themes. Increasing agility to respond in real time to shifts in business demands. Managing the costs associated with increased complexity, through automation and intelligence as well as rationalization and consolidation. Cloud – private, hybrid, public – is not the result; rather, cloud is a means of reaching for these goals.

Inherent in this modernization shift is a transition – sometimes subtle, sometimes seismic – from static traditional architectures (got any 3-tier apps left?) and proprietary platforms to new paradigms of virtualization, microservices, and software-defined everything.

What does this shift mean for monitoring visibility? Specifically, for critical performance insights sourced from wire data? As network and application architectures change to support modernization goals, traditional approaches to monitoring must also adapt. Gone are the days when a SPAN on a core switch could provide comprehensive visibility into users accessing your entire application portfolio. Today, these core aggregation points have exploded into dozens or hundreds of smaller points of light. As a result, some vendors claim agents are the answer; some even suggest that NPM may be dead.

Long live wire data

This physical to virtual technology shift has many ramifications. From a network visibility perspective, it has disrupted the status quo, creating access challenges that are today being addressed by packet broker vendors such as Ixia. In fact, the approach remains consistent, mirroring the same physical to virtual shift. Modern and optimized visibility architectures incorporate virtual taps to complement (or supplant) physical taps, aggregating and pruning packets as appropriate to deliver clean traffic streams to monitoring tool destinations.

So don’t let your architecture dictate your level of performance monitoring. If it has been important to include wire data in your data center APM strategy, and you’re migrating these apps to the cloud (or your data center is becoming more cloud-like), won’t the same level of visibility still be important after the shift?

Listen to Dynatrace’s Jason Suss and Ixia’s Keith Bromley as they chat with the folks at LMTV. You’ll hear them discuss visibility architectures from the data center to the cloud, the importance of wire data APM, even compare apps to autos. You may also be interested in this free eBook, co-authored by Dynatrace and Ixia: Operational Visibility in the Software-Defined Data Center.

The post A Thousand Points of Light: Critical Performance Insights from Wire Data appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

The Tweets You Missed in May

Sonar - Fri, 06/09/2017 - 15:22

Here are the tweets you likely missed last month!

SonarJava 4.9 Released: toward the goal to have more than 90% of the bugs being highly relevant, and 4 new rules. https://t.co/DVKoUmfNUE pic.twitter.com/w57Nxw9qgO

— SonarQube (@SonarQube) May 15, 2017

SonarC# 5.10 Released: 9 new rules and lot of improvements https://t.co/DhEnV6strV.
See example of unconditional jump in Roslyn pic.twitter.com/VwP4MpJcIp

— SonarQube (@SonarQube) May 12, 2017

SonarJS 3.0 Released: Being Lean and Mean in JavaScript. Blog Entry: https://t.co/bItwkfgsSf and Product News: https://t.co/TjXHmKXvaR pic.twitter.com/X9lAqi8q9k

— SonarQube (@SonarQube) May 1, 2017

SonarPython 1.8 Released: to track unused and self-assigned variables.
see https://t.co/8x0LzBYOqZ #python pic.twitter.com/QQLQ09aumT

— SonarQube (@SonarQube) May 22, 2017

SonarLint for IntelliJ 2.10 released: many rules fine-tuned for high accuracy in bug detection and 4 new rules https://t.co/YRbsOhWSnk pic.twitter.com/n0NHf5B8az

— SonarLint (@SonarLint) May 30, 2017

SonarLint for Eclipse 3.1 refines the analysis of JavaScript to focus on bugs. https://t.co/QMrab1IAlw pic.twitter.com/D2wgkoNgL1

— SonarLint (@SonarLint) May 10, 2017

SonarLint for Visual Studio 2.13 brings 9 additional rules https://t.co/b8G4lBX6Qv pic.twitter.com/X28DsueVke

— SonarLint (@SonarLint) May 8, 2017

Categories: Open Source

How to update app.config file using PowerShell?

Testing tools Blog - Mayank Srivastava - Fri, 06/09/2017 - 12:49
The code below helps update the app.config file with the given data:

#It helps to connect db and get the data.
$HostName = $env:computername
$connectionstring = 'Server=XX.XX.XX.XX;Database=TestDataBase;User Id=VM;Password=VMTest;MultipleActiveResultSets=True'
$connection = New-Object System.Data.SqlClient.SqlConnection
$connection.ConnectionString = $connectionString
$connection.Open()
$query = "SELECT [Params].value('(/root//Version/node())[1]', 'nvarchar(max)') as FirstName from Request where [Params].value('(/root//Name/node())[1]', 'nvarchar(max)') = '"+$HostName+"'"
$command = $connection.CreateCommand()
$command.CommandText = $query…
Categories: Blogs

Quality Excites, Gliwice, Poland, June 23-24 2017

Software Testing Magazine - Fri, 06/09/2017 - 10:00
Quality Excites (QE) is a free two-day conference on software testing and software quality that takes place in Gliwice, Poland. It provides lectures and workshops for professional software testers who...

Categories: Communities

Test Automation Day, Rotterdam, Netherlands, June 22 2017

Software Testing Magazine - Fri, 06/09/2017 - 09:00
Test Automation Day is a one-day, multi-track software testing conference organized for software testers and IT professionals in Rotterdam, the Netherlands. It features talks and workshops by...

Categories: Communities

Comey Hearings: What digital experience management means to news media

Politics aside, today’s testimony by James Comey provides a fascinating look at how events can impact Digital Experience Management for news media organizations. I’m using the term Digital Experience Management (DEM) because the industry (including Gartner and Forrester Research) has identified that Digital Experience needs to be considered and managed in a unique way. DEM draws the relationship between performance, availability, and end-user/consumer behavior when they interact with digital properties like web sites, mobile applications, etc.

I’m looking at various news providers using Dynatrace technology to illustrate how complex web applications require a new methodology and approach for understanding DEM impact.  Performance metrics are key here and I’ll explain more on this in a moment.

To give you an example of what we are seeing, below is a performance comparison of 20 different news media organizations as observed from locations across the US.

As you can see there is a huge difference from a performance perspective between these different news media organizations. The performance of these sites can be impacted by a wide variety of variables. Some of these include object size (page weight), cached resources, third-party contributors, client side code (javascript), and even server-side responsiveness.

Below is an example of how Dynatrace analyzes an individual page load for a real user and identifies key performance items.

On which web performance metrics should digital experience management focus?

Below is an example of an Analysis Dashboard of the front page for a major news outlet.

Let’s go over what these “tiles” would tell a digital business owner at a news organization.

The Response Time & Success Rate tile (top left) provides a performance trend view which shows aberrations and events which could be impacting end users. It’s also useful to know when you have recovered from an event and the degree to which performance has been changed by an event.

The Geographic Response Time tile (bottom left) shows response time by specific regions. This is especially important if you are using a CDN (Content Delivery Network), as high regional response times can be associated with an oversubscribed PoP (CDN Point of Presence) or misrouted traffic. CDN services are expensive, and this is a way to help manage your technology investments.

The Contribution by Domain tile (second in from the top) highlights the impact that third parties like social media, ad networks, analytics/tracking tools are having on end-user performance.  This view helps you manage technology investment and risk associated with a third-party touching your customer.

The Key Delivery Indicators tile (second in from the bottom) shows observed byte count (how much data was delivered). This often gets overlooked by retailers, but it will show issues related to content that is not optimized (what happens when the creative team releases a 10MB juggling monkey image to the landing page), or malicious activity (what happens when a hacker re-routes your site to their page). Metrics like Object, Connection and Host count also provide an indication of the complexity of the site and whether something unexpected is occurring.

Let’s switch to the right side of the screen.

The DNS performance tile (top, second from the right) shows DNS resolution time. DNS can be thought of as a phone book, routing site names to server addresses. Again, this often goes overlooked. However, the DDoS attack on DNS provider Dyn on October 21, 2016 showed that DNS is critically important. Knowing when/if your DNS is being impacted allows you to make changes and recover faster. It also allows you to understand if you are investing in the right partner for providing DNS.

The Network Latency tile (top right) is a measure of how healthy your network connections are. This data can be used to understand if you have peering issues with your network providers, or if your network infrastructure (load balancer) is under pressure.

The Server Response Time tile (bottom, second from the right) is a measure of how fast the server can respond to a request. This allows you to understand, from an end-user point of view, whether the server applications are causing a performance bottleneck; we will come back to this later.

The last tile, on the bottom right, shows Client Impression time. This allows you to understand how long it takes for the browser or mobile browser to display something for the end user. Understanding what is happening in the user’s browser is the final link in the chain.

Digital Experience Management and top-line revenue

News media organizations primarily generate revenue through displaying advertisements to readers/viewers and through subscriptions. When it comes to driving revenue from ad impressions, keeping the user on the site is key. This is what the industry calls “stickiness”. Performance is a key contributor to end-user behavior. One of the ways Dynatrace tracks this is by executing a Bounce Rate analysis. In the graph below you can see that as performance worsens (the time along the bottom), the bounce rate rises. These are readers/viewers navigating away (bouncing) from the site because the page takes too long to load.
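
A minimal sketch of that kind of bounce-rate analysis, grouping invented sessions (page-load time in seconds and whether the visit bounced) into one-second buckets; the data here is made up purely for illustration, while a real analysis would use RUM sessions:

from collections import defaultdict

# Invented (load_time_seconds, bounced) pairs; a real analysis would use RUM data.
sessions = [(1.2, False), (1.8, False), (2.5, False), (3.1, True),
            (4.0, True), (4.4, False), (5.2, True), (6.7, True)]

buckets = defaultdict(lambda: [0, 0])  # load-time bucket -> [bounces, total visits]
for load_time, bounced in sessions:
    bucket = int(load_time)
    buckets[bucket][0] += bounced
    buckets[bucket][1] += 1

for bucket in sorted(buckets):
    bounces, total = buckets[bucket]
    print(f"{bucket}-{bucket + 1}s: bounce rate {bounces / total:.0%}")

The product does this analysis for you; the sketch only shows the grouping idea behind the graph.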

If performance is poor, users will not remain on the site and the number of ad impressions will drop. This directly impacts the top-line revenue generation for a news media website.

Also, code on the page can cause issues which prevent an ad from loading or being seen. Below is an example of how Dynatrace discovers a JavaScript error on a page. You can see a screenshot showing a blank region of a page where an ad should be located.

When we look at the JavaScript error, we can see it is an issue with code coming from an ad provider which is failing and causing the ad not to display.

These JavaScript errors also impact top-line revenue for a news media outlet because there is no ad displaying for a reader/viewer when the error occurs.

Digital Experience Management needs insight into the back end

We mentioned that performance can impact top-line revenue when readers/viewers bounce off a news site. One of the contributing factors to poor performance comes from the “back end”. The “back end” in this case refers to the servers which respond to page requests, whether they are hosted by the news site or are cloud-based servers and services.

Below is a comparison of ten news media companies which provide the fastest server-side “back end” response times and ten news media companies providing the slowest response times. The fastest sites provide response times faster than 200 milliseconds from their servers, and the slower outlets can take over half a second.

While these response times might sound fast, the slower server-side response times can be expensive for the news outlet (and not just because of the readers/viewers bouncing off slow pages). When you add up the processing required to service millions of visits, the sites providing the slower response times are paying more to serve the same number of viewers as the sites providing the faster response times. This is all about computational capacity. The slower a transaction is on the server side, the more compute resources it consumes. Compute resources, whether you host them yourself or use them from the cloud, cost money.
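
As a rough back-of-the-envelope illustration (the traffic volume below is invented; only the 200 millisecond and 500 millisecond figures come from the comparison above), the extra busy server time adds up quickly:

# Hypothetical traffic volume; the fast/slow response times are the figures quoted above.
page_views = 10_000_000
fast_seconds, slow_seconds = 0.2, 0.5

extra_hours = page_views * (slow_seconds - fast_seconds) / 3600
print(f"Extra server busy time: {extra_hours:,.0f} hours")  # roughly 833 hours

Whether that maps directly to cost depends on concurrency and how the infrastructure is billed, but the direction is clear: slower server-side responses consume more compute to serve the same audience.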

The applications which run these news websites and mobile apps are exceedingly complex. The complexity is so great that effective DEM data needs to be augmented with Artificial Intelligence based analyses to understand all of the dependencies which exist. Below is an example of a Dynatrace Smartscape automatically discovering all of the compute resources that would exist for any of the news media organizations we looked at today.

What’s going on behind the curtain?

While everyone is watching the political theater today, what interests me is what is happening behind the scenes. Events like this drive traffic to news outlets; however, depending on how a news site is being delivered, there can be a substantial impact on digital experience, which can lead to frustrated readers/viewers bouncing off the site. Poor digital experience impacts the ability to generate revenue from ad impressions for news sites. The news is a highly competitive market, and the technology driving it is increasingly complex. To remain competitive, news sites need to look for new ways to manage their digital experience.

OK, back to watching some political theater.

The post Comey Hearings: What digital experience management means to news media appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

The High Price of Delayed Feedback

Gurock Software Blog - Thu, 06/08/2017 - 22:55

This is a guest posting by Matt Heusser. Matt is the Managing Director of Excelon Development, with expertise in project management, development, writing, and systems improvement. And yes, he does software testing too.

Most of us are familiar with the development benefits of small batches. The company can release the most important features and gain value in weeks, days, or hours, instead of waiting to release the entire kitchen sink in six months. Even if we did not realize this, Agile Dogma says that small batches are better, so people are inclined to release more often.

Regression testing is probably the single biggest struggle for teams I see moving to something like Scrum; fitting regression testing into a two-week cycle is more than a bit of a challenge. Instead, teams often sprint along for two, three, four, five sprints, then have a “hardening sprint” or two (or three), that are essentially test/fix/retest sprints.

The idea here is to reduce time spent testing, compared to other activities. Yet teams that try this rarely see that benefit. Instead, they release software less often while the time for regression testing balloons. Of course it does.

Code Changes Create Uncertainty

Every time a programmer makes a change, they create a little bit of uncertainty. If the change is small, a “one point story” for example, and a software tester tests the story the same day, the uncertainty decreases to a certain degree. It is still possible that the change had some unintended consequence somewhere else in the software.

These changes add up over time. Eventually, we have the regression-test, fix/retest cycles. The longer we wait between testing, the more uncertainty and more defects we will have, which means we need more testing with better coverage. Because more calendar time will have elapsed, tracking the defect to a change, and to the programmer who created that change, will be hard or impossible. The more time that has gone by, the further the programmer assigned to the fix will be from that piece of code. As a result, fixes are less likely to be correct and will require more debugging.

The graph below shows the problem, comparing a team that ships every two-week sprint with a team that ships at the end of each “project.” The lines that go up are programming, which increases uncertainty, while the lines that go down are the regression-test steps, which decrease it.

Graph showing uncertainty over time for Scrum and the Larger Batch Approach. Uncertainty rises more regularly and much less in Scrum than in the Larger Batch Approach, where uncertainty will eventually rise to much larger amounts.
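
A toy simulation of that picture (all parameters are invented, not data from any real team) shows why the peak uncertainty differs so much between the two cadences:

def peak_uncertainty(total_changes, changes_per_test_cycle, growth=1.0, cleanup=0.9):
    # Each change adds `growth` uncertainty; each regression-test cycle removes
    # `cleanup` (90%) of whatever has accumulated. Returns the highest level reached.
    uncertainty = peak = 0.0
    for change in range(1, total_changes + 1):
        uncertainty += growth
        peak = max(peak, uncertainty)
        if change % changes_per_test_cycle == 0:
            uncertainty *= (1 - cleanup)
    return peak

print(peak_uncertainty(120, 10))   # regression test every 10 changes: peak stays near 11
print(peak_uncertainty(120, 120))  # one big test phase at the end: peak reaches 120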

Note that uncertainty tends to build slowly over time. Under scrum, it is possible to spend an entire sprint on bug fixing, testing, and eliminating technical debt. In contrast, with a waterfall, at least in my experience, the drum beat of project delivery means we ship when the deadline hits, and immediately start working on the next project.

Take a look at the diagram above for a minute. Think about uncertainty. Notice: The longer the development phase, the more uncertainty, the more time we need for test/fix/retest cycles. This leads to Heusser’s rule of regression testing:

Shipping more frequently means less uncertainty, means less test/fix/retest cycles, means shorter regression test effort.

While we tend to understand this at the team level, when it comes to teams-of-teams, we often get into trouble. For example, the Scaled Agile Framework starts with an example of a Product Increment, or PI, typically structured as three or four development sprints followed by three hardening sprints.

Most people interpret “hardening” as test/fix/retest, but the real intent behind the term was for the team to integrate with each other’s work. So, while one team may have perfectly good-to-go software each release, they might rely on another component that does not exist yet, or is being changed. In that case, for a short period, I can understand hardening sprints as a transition concept. If the transition never ends and the software is not suitable for shipping per sprint, the cost of testing will go up while uncertainty will rise. Eventually, those hardening sprints will become testing sprints. The economy of testing less often will be proved false.

If it hurts, do more of it. If integration is expensive, do it more often. Here’s why:

Release-Test Calculus

When Isaac Newton wanted to find the area under a curve, he started off by approximating the area using boxes. Four smaller boxes would be more accurate than two, and eight more accurate than four, and so on. Eventually Newton realized that an infinite number of infinitely small boxes would be the most accurate.
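
For anyone who wants to see that intuition in numbers, here is a small sketch (integrating x squared between 0 and 1 is just an arbitrary example; the true area is one third) of the box approximation tightening as the boxes shrink:

def riemann_area(f, a, b, boxes):
    # Approximate the area under f between a and b with `boxes` equal-width
    # rectangles, each sampled at its midpoint.
    width = (b - a) / boxes
    return sum(f(a + (i + 0.5) * width) * width for i in range(boxes))

for boxes in (2, 4, 8, 1000):
    print(boxes, riemann_area(lambda x: x * x, 0.0, 1.0, boxes))  # tightens toward 1/3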

If Heusser’s rule is true, it leads to the wacko idea that, on some projects, we can release so often that we have an infinite number of infinitely small regression-test cycles. Dividing infinity by infinity, we find the regression-test cycles go away.

Now, that isn’t every project, and most teams have a lot of work to do before they can get to that place, including reducing the defect-injection rate, isolating the components, having independent deploys, and being able to find and roll back problems quickly. Nor does regression testing go away; instead, it happens all the time, with a constantly updated list of emergent risks investigated by humans, combined with smart automated checking by the computer.

This is Heusser’s rule of regression-testing:

“Shipping more frequently means less uncertainty, means less test/fix/retest cycles, means shorter regression test effort.”

Put that on a t-shirt. More importantly, change behaviors to recognize that as reality.

Because it is.

Categories: Companies

Code Health: Reduce Nesting, Reduce Complexity

Google Testing Blog - Thu, 06/08/2017 - 20:24
This is another post in our Code Health series. A version of this post originally appeared in Google bathrooms worldwide as a Google Testing on the Toilet episode. You can download a printer-friendly version to display in your office.

By Elliott Karpilovsky

Deeply nested code hurts readability and is error-prone. Try spotting the bug in the two versions of this code:

Code with too much nesting:

response = server.Call(request)

if response.GetStatus() == RPC.OK:
  if response.GetAuthorizedUser():
    if response.GetEnc() == 'utf-8':
      if response.GetRows():
        vals = [ParseRow(r) for r in
                response.GetRows()]
        avg = sum(vals) / len(vals)
        return avg, vals
      else:
        raise EmptyError()
    else:
      raise AuthError('unauthorized')
  else:
    raise ValueError('wrong encoding')
else:
  raise RpcError(response.GetStatus())

Code with less nesting:

response = server.Call(request)

if response.GetStatus() != RPC.OK:
  raise RpcError(response.GetStatus())

if not response.GetAuthorizedUser():
  raise ValueError('wrong encoding')

if response.GetEnc() != 'utf-8':
  raise AuthError('unauthorized')

if not response.GetRows():
  raise EmptyError()

vals = [ParseRow(r) for r in
        response.GetRows()]
avg = sum(vals) / len(vals)
return avg, vals

Answer: the "wrong encoding" and "unauthorized" errors are swapped. This bug is easier to see in the refactored version, since the checks occur right as the errors are handled.

The refactoring technique shown above is known as guard clauses. A guard clause checks a criterion and fails fast if it is not met. It decouples the computational logic from the error logic. By removing the cognitive gap between error checking and handling, it frees up mental processing power. As a result, the refactored version is much easier to read and maintain.

Here are some rules of thumb for reducing nesting in your code:
  • Keep conditional blocks short. It increases readability by keeping things local.
  • Consider refactoring when your loops and branches are more than 2 levels deep.
  • Think about moving nested logic into separate functions. For example, if you need to loop through a list of objects that each contain a list (such as a protocol buffer with repeated fields), you can define a function to process each object instead of using a double nested loop.
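
To make the last rule of thumb concrete, here is a minimal sketch (the names average_row and summarize, and the sample data, are invented for illustration) of moving the inner loop of a doubly nested loop into its own function:

def average_row(row):
    # The nested logic lives here: parse each cell and average the results.
    vals = [float(cell) for cell in row]
    return sum(vals) / len(vals)

def summarize(records):
    # The outer loop stays one level deep instead of wrapping a second loop inside it.
    return [average_row(record) for record in records]

print(summarize([["1", "2", "3"], ["4", "5", "6"]]))  # [2.0, 5.0]

Each function now reads at a single level of nesting, which is the point of the rule.
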
Reducing nesting results in more readable code, which leads to discoverable bugs, faster developer iteration, and increased stability. When you can, simplify!
Categories: Blogs

But is it Automation?

Hiccupps - James Thomas - Thu, 06/08/2017 - 10:27

Recently, I needed to quickly explore an aspect of the behaviour of an application that takes a couple of text file inputs and produces standard output.

To get a feel for the task I set up one console with an editor open on two files (1.txt and 2.txt) and another console in which I ran the application this way:
$ more 1.txt; more 2.txt; diff -b 1.txt 2.txt
a b c d e f
a b c
d e f
1c1,2
< a b c d e f
---
> a b c
> d e f

$ more 1.txt; more 2.txt; diff -b 1.txt 2.txt
a b c d e f
a b c d e f

$ more 1.txt; more 2.txt; diff -b 1.txt 2.txt
a b c d e f
a b cd e f
1c1
< a b c d e f
---
> a b cd e f
As you can see I have a single command line that dumps both the inputs and the outputs. (And diff was not the actual application I was testing!)

After each run I changed some aspect of the inputs in the first console, pressed up and enter in the second console.

What am I achieving here? I have a simple runner and record of my experiments and an easy visual comparison across the whole set. It's quick to set up and in each iteration I'm in the experiment rather than the infrastructure of the experiment.

I could have, for example, created a ton of files and run them in some kind of scripted harness or laboriously by hand. But I was short of time and I wanted to spend the time I had on exploring - on responding to what I'd observed - and not on managing data or investing in stuff I wasn't sure would be valuable yet.

I still hear and see too much about manual and automated testing for my comfort. Is what I did here manual testing? Is it automation? Could a "manual tester" really not get their head around something like this? Could an "automation tester" really not stoop so low as to use something this unsophisticated?

Bottom line for me: there's a tool that is at my disposal to serve my needs at appropriate cost, with appropriate trade-offs, and in appropriate situations. Why wouldn't I use it?
Image: https://flic.kr/p/7VhPft
Syntax highlighting: http://markup.su/highlighter
Categories: Blogs

Dynatrace Managed feature update, version 120

Help us improve Dynatrace Managed

To better understand how you and your organization’s end users make use of Dynatrace Managed, we now provide you the option of sending Dynatrace usage data from your end-users’ browsers directly back to Dynatrace. We analyze this information to ensure that we focus our efforts on the aspects of Dynatrace that are most relevant to you and to identify areas where you may be having trouble understanding or using Dynatrace. Of course, privacy is a top concern. For complete details on the data we capture and how they are protected, see the Dynatrace privacy policy.

Easily switch license keys

For situations where your current Dynatrace license has expired and you’ve received a new license, it’s now possible to change license keys directly in the Dynatrace Managed UI. This is especially useful if you’ve been using a Dynatrace free trial license and have received a full license that you are to use going forward.

To update your Dynatrace Managed license key:

  1. Select Licensing from the navigation menu.
  2. Paste your new license key into the License key field (see below) and click the check mark button to save the change.

Opt-out from managing firewall settings

In some situations (for example, when a system is under certified change control), the automatic management of IP tables (iptables) that the Dynatrace Managed installer performs during upgrades may be problematic from a compliance perspective. This is why you can now opt out of automatic iptables management by using the command-line option --firewall off.

If you do opt out of automated iptable management, ensure that all ports required by Dynatrace are open and available. For full details, see all required port settings.

The post Dynatrace Managed feature update, version 120 appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

Support for PHP and Staticfile apps on Cloud Foundry PaaS

We’re happy to announce that, in addition to support for Java and Node.js applications, Dynatrace now also provides monitoring support for PHP and Staticfile applications that are deployed in Cloud Foundry PaaS environments.

Cloud Foundry is a Platform-as-a-Service that consists of a set of open source tools that help you run applications at scale. Applications deployed on Cloud Foundry are usually run through technology-specific buildpacks that provide framework and runtime support for applications running on the Cloud Foundry platform. For instance, the Staticfile buildpack provides runtime support for applications that require no backend code other than an Nginx web server.

Dynatrace OneAgent for Cloud Foundry PaaS is integrated with release v4.3.34 of Cloud Foundry’s PHP buildpack and also with release v1.4.6 of Cloud Foundry’s Staticfile buildpack.

Start monitoring Cloud Foundry PaaS applications

To set up Cloud Foundry monitoring you first need to link your Dynatrace account with your Cloud Foundry applications. To do this, you need to create a Dynatrace service in your Cloud Foundry environment. For complete details, please see the Cloud Foundry installation guidelines.

Once your Cloud Foundry applications are monitored with Dynatrace OneAgent, you’ll receive the full range of application and service monitoring visibility that Dynatrace provides (for example, Smartscape and service-level insights with Service flow). Properties that are specific to Cloud Foundry are also provided at the process-group instance level. Note in the example below that values are provided for Cloud Foundry space ID, Cloud Foundry application, and, because multiple application instances are running, Cloud Foundry instance index.

Your feedback

We’d love to hear from you. Tell us what you think about the new Dynatrace integrations into the PHP and Staticfile buildpacks. Please share your feedback at Dynatrace Answers.

The post Support for PHP and Staticfile apps on Cloud Foundry PaaS appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies
