Thanks to those of you who joined us for our last webinar, Best Practices in Mobile Continuous Integration, with Kevin Rohling. The webinar covered topics like:
- What makes mobile CI so different
- Best ways to use emulators and simulators in testing
- Suggestions for CI tools and mobile testing frameworks
Missed the presentation, want to hear it again, or share with a colleague?
Listen to the recording HERE and view the slides below.
Best Practices in Mobile CI (webinar) from Sauce Labs
For this webinar, we took a poll of our listeners about mobile CI and we thought we’d share the results. 198 people answered the following questions:
What CI tools do you use for mobile testing?
a. I don’t use CI – 18%
b. Jenkins – 27%
c. Travis – 2%
d. Bamboo – 6%
e. Ship.io – 0%
f. Other – 6%
What % of your mobile functional testing is automated vs. manual?
a. All manual – 15%
b. 1 – 25% automated – 25%
c. 26 – 50% automated – 10%
d. 51 – 75% automated – 7%
e. 76% – 100% automated – 1%
What % of your mobile tests are on emulators vs. real devices?
a. 100% emulators / no real devices – 10%
b. Up to 75% emulators / up to 25% real devices – 12%
c. Up to 50% emulators / up to 50% real devices – 9%
d. Up to 25% emulators / up to 75% real devices – 18%
Mobile definitely has a long way to go to catch up to web app testing. We’ll be sure to keep sharing tips and tricks for optimizing your flow. Happy testing!
The Apple Watch has recently landed in the hands of thousands of users around the globe (including many of our own uTesters!). Because of this, it’s no surprise that the hot wearable is one of the most sought-after devices by our global customer base for testing coverage on their apps. For the week of April […]
The post Top 10 Paid Software Testing Projects at uTest: Week of April 27 appeared first on Software Testing Blog.
It’s true! We’re celebrating the fact that more than 250 million tests have been run on our platform! It’s crazy to think that we announced just over 100 million tests at the end of February 2014. That’s an increase of 150% in just 14 months.
This time we thought we’d take a look at how our ecosystem has been growing as well, including our work with Appium, a cross-platform mobile test automation framework sponsored by Sauce Labs and a thriving community of open source developers.
Check out more Sauce-y stats in our celebratory infographic below. (Click to enlarge.)
A single resource will rarely teach you all you need to know or explain it in just the right way for your learning style or current understanding. Hack away at one of the sites, then switch to another to both re-learn what you've been covering and learn new things.
Remember: study for 40 minutes per day for 30 days.
Tools to Code with
Online Reference Books
Game Making Tutorials
1. Testing didn’t hit the right combination of factors to trigger it.
Sometimes triggering a bug takes the perfect storm of the right (or wrong?) web browser, browser version, OS, screen dimensions, device… Because testing can never cover everything, it’s possible that you’ll never hit that specific bug-triggering combination. When that happens, a bug may slip through to production and stay hidden until a user discovers it “in the wild.”

What you can do about it: Whenever possible and practical, test your application under several different combinations of conditions. Pay particular attention to the combinations most commonly used by your customers.
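The combinatorics here can be made concrete with a quick sketch. The Python snippet below (the browser, OS, and viewport values are purely illustrative) enumerates an environment matrix and pulls out a customer-priority subset to test first:

```python
from itertools import product

# Hypothetical environment dimensions -- substitute the browsers, OSes,
# and viewports your own customers actually use.
browsers = ["Chrome", "Firefox", "Safari"]
systems = ["Windows", "macOS", "Android"]
viewports = [(1920, 1080), (375, 667)]

# Full cross product: every combination a bug could be hiding in.
combos = list(product(browsers, systems, viewports))
print(f"{len(combos)} combinations to cover")  # 3 * 3 * 2 = 18

# Prioritize the combinations customers use most, then sample the rest.
priority = [c for c in combos if c[0] == "Chrome" and c[1] == "Windows"]
print(priority)
```

Even this tiny matrix yields 18 combinations, which is exactly why full coverage is rarely practical and prioritization matters.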
2. It’s been there for a long time and has been forgotten.
Ah, the backlog! It’s supposed to allow us to keep track of all of the bug fix tickets so we can prioritize them and work them into the sprint or project plan.

Although we have good intentions when we create a follow-up issue to resolve a bug, unfortunately sometimes the backlog is like the Island of Misfit Toys. These tickets are created, dropped in the backlog, and forgotten – and thus, their bugs are forgotten too.

What you can do about it: Whether your team is Agile or not-so-Agile, make sure that someone treks through the backlog from time to time to make sure that the bugs filed there aren’t forgotten. You may even discover some that don’t exist anymore and can be closed!
3. Someone noticed it but didn’t speak up.
With the tension between testers and developers that plagues some companies, it’s not always easy to point out a bug. In fact, it can be downright intimidating. This can be even more true when the tester is new to the testing craft or new to the team, or when the developer is a highly respected senior programmer.

If a tester notices a bug – or even just a possible bug – and doesn’t mention it, the chances of getting that issue resolved are pretty slim indeed.
What you can do about it: Foster an open, respectful, and approachable team culture. Make sure that all testers, regardless of experience level, know that they don’t need to be afraid to ask questions and point out things that don’t seem quite right.
4. Another bug obscured it.
Few things are as effective at hiding a bug as another bug that prevents you from triggering it or reaching it in the first place.

Say you’re testing a new feature. Things seem to be going well, but suddenly you reach a point where further testing isn’t even an option because of a bug. For example, maybe you need to click a link to actually use that new feature, but the link isn’t there at all. You’ll have no way of knowing that there’s a problem with how that new feature works on a mobile device if you can’t even get the new feature to appear.

What you can do about it: Be sure to take thorough testing notes on what you did and didn’t cover in any given round of testing – you’ll help both yourself and the dev. Once that failed ticket is fixed and testable, start from the beginning and test it in full.

What do you think are some of the common reasons bugs make it to production? Share your thoughts in the comments below.
- Added support for iOS 8.3
- Added support for Firefox 38
- Added support for Chromium web browser
- Added QtItem capability providing common attributes for Qt list/tree items and cells
Download the latest version of Ranorex.
(You can find a direct download link for the latest Ranorex version on the Ranorex Studio start page.)
This is a cross-post from the Microsoft ALM web site.
Technical debt is the set of problems in a development effort that make forward progress on customer value inefficient. Technical debt saps productivity by making code hard to understand, fragile, and difficult to validate, and by creating unplanned work that blocks progress. Technical debt is insidious. It starts small and grows over time through rushed changes, lack of context and lack of discipline. Organizations often find that more than 50% of their capacity is sapped by technical debt.
SonarQube is an open source platform that is the de facto solution for understanding and managing technical debt.
Customers have been telling us and SonarSource, the company behind SonarQube, that the SonarQube analysis of .Net apps and its integration with Microsoft build technologies need to be considerably improved.
Over the past few months we have been collaborating with our friends from SonarSource, and we are pleased to make available a set of integration components that allow you to configure a Team Foundation Server (TFS) build to connect to a SonarQube server and send the following data, gathered during a build under the governance of the quality profiles and gates defined on the SonarQube server:
- code clone analysis
- code coverage data from tests
We have initially targeted TFS 2013 and above, so customers can try out these bits immediately with code and build definitions that they already have. We have tried using the above bits with builds in Visual Studio Online (VSO), using an on-premises build agent, but we have uncovered a bug around the discovery of code coverage data which we are working on resolving. When this is fixed we’ll send out an update on this blog. We are also working on integration with the next generation of build in VSO and TFS.
In addition, SonarSource have produced a set of .Net rules, written using the new Roslyn-based code analysis framework, and published them in two forms: a NuGet package and a VSIX. With this set of rules, the analysis that is done as part of the build can also be done live inside Visual Studio 2015, exploiting the new Visual Studio 2015 code analysis experience.
The source code for the above has been made available at https://github.com/SonarSource, specifically:
We are also grateful to our ever-supportive ALM Rangers who have, in parallel, written a SonarQube Installation Guide, which explains how to set up a production ready SonarQube installation to be used in conjunction with Team Foundation Server 2013 to analyse .Net apps. This includes reference to the new integration components mentioned above.
This is only the start of our collaboration. We have lots of exciting ideas on our backlog, so watch this space.
As always, we’d appreciate your feedback on how you find the experience and ideas about how it could be improved to help you and your teams deliver higher quality and easier to maintain software more efficiently.
Simon Brown’s book, Software Architecture for Developers has been on my reading list for some time. I am aware of Brown’s talks that he gives at conferences, and his very good workshop on describing how to draw more effective diagrams as a communication mechanism for developers to other groups, but I wasn’t quite sure what his book was going to cover.
This weekend, whilst travelling, I had a bit of airport time to do some reading and plough through his book.
What I enjoyed about the book
Architecture is a touchy subject, and Brown doesn’t have any problems raising this as a contentious topic, particularly in the agile community where it doesn’t have an explicit practice. Some XP books explain the role, but mantras like “Big Design Up Front” and “Last Responsible Moment” are often (wrongly) interpreted as “do no architecture.” What I liked about Brown’s approach is his recognition of the Goldilocks approach – not too little and not too much – where he provides both points of view and some concrete practices.
Brown covers important topics like quality attributes (Cross Functional Requirements) and what the role of an Architect is (and that it is just a role, not necessarily a person). I am biased, but I enjoyed Brown’s perspective on whether or not architects should code, and it aligns well with my own point of view that for a Tech Lead (or Architect) to make effective decisions, they need to have empathy and understand (live, breathe and sometimes burn for) the decisions they make.
I appreciated the way that Brown puts “Constraints” and “Principles” as key factors that aren’t necessarily represented in the codebase and are unlikely to be easily discoverable for new people. Both are things that I have done when leading software teams and are things I would repeat because I find it helps people navigate and contribute to the codebase.
What I found slightly strange about the book
I believe the book is really strong, but there were a few sections that seemed slightly out of place or not yet completely finished. One was “Sharepoint projects need architecture too”, which I don’t necessarily disagree with, but which could easily be extended to “Any software product extended to build an application needs architecture too” (cue s/Sharepoint/CMS/g or other examples).
Software Architecture for Developers is a very accessible, relevant and useful book that I do not have any problems recommending for people looking at how to effectively implement Software Architecture in today’s environment.
There are many advantages of breaking an application into smaller services. When APIs and Interfaces are well defined it allows more independent development on a separate code base, keeping risk low to break the whole app with a single code change. It allows for more flexible and scalable deployments when done right and it is […]
The post Identify Bad Service Oriented Architectures Through Metrics appeared first on Dynatrace APM Blog.
Google and Apple are seasoned adversaries at this point, with each company constantly threatening to move in on each other’s territory and steal market share. For example, Google has gotten involved with device manufacturing (see: Google Nexus Tablets), an area which has historically been Apple’s bread and butter, and Apple has engineered a Maps application, […]
The post Apple or Google: Who Will Reign Supreme With Mobile Payments? appeared first on Software Testing Blog.
To accelerate time to market and cut costs, many product development teams can take requirements written for similar projects and reuse them for a new project. Not sure how to get started with requirements reuse? Our newest guide, 6 Best Practices for Requirements Reuse, can help!
This guide provides an overview of the following key reuse practices:
- Document the requirements
- Tune up existing requirements
- Begin with the end in mind
- Avoid excessive granularity
- Develop a pattern
- Link the dependencies
These best practices can help your team achieve time- and cost-savings goals without sacrificing product quality. The tool you use to write and manage requirements can help too. For example, TestTrack includes handy features to help you reuse requirements. We hope you’ll find this free guide helpful and learn some new ways to get the most out of your requirements.

Get the Guide
The post New Guide Provides 6 Best Practices for Requirements Reuse appeared first on Blog.
This is a guest post by Greg Sypolt, a Senior Engineer at Gannett Digital and automated testing expert.
Technological advancements and the explosion of devices across multiple platforms mean hardware and software developers have a much more difficult job. They have to keep up with the demand to develop and roll out new products. One of the most significant issues is accounting for the differences in system response when responding to mobile traffic rather than to internet traffic.
Applications must be tested to make certain they run responsively on key platforms and across numerous networks. Effective functional testing eases the pressure on device manufacturers while allowing application developers the time to collect applicable metrics that improve product quality.
A New Set of Challenges
Testing mobile applications is completely different from, and significantly more complicated than, testing traditional desktop and web applications. Mobile devices don’t just emulate the desktop environment — they have their own set of requirements. Mobile app testing is far trickier because of the following key aspects.
Mobile applications run on devices that have different:
- Operating Systems (OS) such as iOS, Android, Windows and BB;
- Versions of those OS; and
- Manufacturers such as Apple, Samsung, Nokia, Motorola and LG.
When an application needs to run on multiple OSes, devices, and varied screen sizes, QA teams are faced with the challenge of ensuring the application functions in every environment. The graphic below of the iOS support matrix alone shows the complexity of a single series of devices and a single OS type.
Availability of Mobile Testing Tools:
Desktop and web application-based testing tools cannot be used for testing mobile applications, so a new testing framework is required. Some of the frameworks currently available for writing test scripts are:
You must be able to effectively emulate various bandwidth rates, because your end users will be operating on a variety of networks and bandwidths. You also must be able to test geographically isolated loads accurately to simulate real-world traffic.
Screen Size and Densities:
With the diversity of screen sizes and densities available today, you must also be able to test your mobile app on different screen configurations so that it:
- Fits on small screens;
- Takes advantage of the additional space on larger screens while still looking good on smaller screens (consider, for example, the difference in screen size between the iPhone 4 and the iPhone 6+);
- Is optimized for both landscape and portrait orientations.
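Android’s density-independent pixel (dp) unit is a useful mental model for the density side of this list: one dp is defined as one pixel on a 160 dpi (mdpi) screen, so the same logical size maps to different physical pixel counts per device. A minimal sketch:

```python
def dp_to_px(dp: float, dpi: float) -> float:
    """Convert density-independent pixels to physical pixels.
    Android defines 1 dp as one pixel at 160 dpi (mdpi)."""
    return dp * dpi / 160.0

# The same 48 dp touch target occupies different pixel counts per device:
for name, dpi in [("mdpi", 160), ("xhdpi", 320), ("xxhdpi", 480)]:
    print(f"48 dp at {name} ({dpi} dpi) = {dp_to_px(48, dpi):.0f} px")
```

This is why a layout that "fits" on one device can clip or look sparse on another, and why density buckets belong in a test matrix.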
When testing in mobile, there are physical elements that also need to be considered, unlike with web applications. You may need to consider wireless and wired peripheral mode testing:
- Wireless to the device, such as near field communication (NFC), Bluetooth or a stylus; and
- Wired, either internal to the device such as a headphone jack, or external such as a credit card reader or bar code scanner.
Native Applications and Browser-Based Applications:
Further variables come with the mix of native and browser-based applications. Native applications reside on the user’s device and communicate over HTTP(S). Browser-based applications use a modified version of a browser to access applications online. Since many companies use both types of applications to offer solutions to their customers, you must support testing of all types of mobile applications.
Taking Challenges on “Strategy First”
What do these testing challenges mean for web developers and site owners? Primarily, for every web application designed, you must also address a strategy to test the product in the mobile space. Building an app with all the features and functionality needed by the client and user is important. Having a rigorous mobile testing plan in place before the mobile app is deployed is even more crucial to its success.
Mobile applications are becoming more and more sophisticated, significantly increasing the requirement for functional testing. To tackle this, organizations that require app testing are always exploring alternatives to traditional manual testing.
Mobile automation testing is a highly effective approach to mobile app QA. It provides significant business returns when executed using the right tools and infrastructure, while factoring in cross-platform challenges. From sites to web applications to native mobile applications, test automation tools bring full-featured functional testing to mobile platforms.
Strategy for Mobile Test Automation
Automated testing can sometimes be a black hole. The best thing to do is automate something and measure it against a precise objective within a realistic timeframe. Knowing your hard or soft Return on Investment (ROI) goals helps as well.
Mobile testing requires a balanced approach, and the key is understanding your company’s mobile strategy.
Target Device Selection:
You simply cannot test applications on every device that exists, though cloud-based device emulation tools allow you to increase your coverage more and more. The best approach is to analyze the market and choose representative devices that reduce the effort of executing multiple test cases. A few factors to consider are OS version, screen resolution and form factor (smartphones and tablets), while also ensuring the multi-device and multi-platform compatibility of the app.
Emulators mimic the software and hardware environments found on actual devices and provide excellent options, such as the ability to bypass the network and simulate the real-world environment in which actual users run and interact with applications on their devices. You will still need to test on physical devices, but for most applications this can be done more ad hoc. Find the right mix of emulators and physical devices to provide the best results!
Tool Selection Criteria:
Test automation tools create a framework to systemize the testing of mobile native and web apps across platforms. For instance, Appium is one of the few frameworks that can develop cross-platform scripts in multiple scripting languages. Collaboration between iOS and Android developers is crucial to ensuring that every element that looks the same has the same accessibility label, which helps build a testable app.
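As a rough illustration of the cross-platform idea, the sketch below shows how a single test body could drive both platforms when accessibility labels match. The capability values and element ids are hypothetical, and the commented-out connection assumes a locally running Appium server:

```python
# Desired-capability dictionaries for a hypothetical app, one per platform.
android_caps = {
    "platformName": "Android",
    "deviceName": "Android Emulator",
    "app": "/path/to/MyApp.apk",   # hypothetical path
}
ios_caps = {
    "platformName": "iOS",
    "deviceName": "iPhone Simulator",
    "app": "/path/to/MyApp.app",   # hypothetical path
}

def login_test(driver):
    # Works on both platforms *if* developers gave matching elements
    # the same accessibility id on iOS and Android.
    driver.find_element_by_accessibility_id("username").send_keys("demo")
    driver.find_element_by_accessibility_id("login_button").click()

# Against a running Appium server you would connect like this:
# from appium import webdriver
# driver = webdriver.Remote("http://localhost:4723/wd/hub", android_caps)
# login_test(driver)
```

Only the platform-specific capabilities differ; the test logic itself is shared, which is the payoff of consistent accessibility labeling.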
An important step in this process is creating a list of requirements to review when choosing a tool for evaluation. Some questions to ask as you determine your requirements are:
- Am I looking for a Behavior Driven Development (BDD) framework?
- Do I need to support native, hybrid, mobile first (responsive web design) or mobile web?
- Do I need to support diverse mobile platforms such as Android, iOS or Windows?
- Am I looking to test locally or in the cloud to reduce the cost of ownership?
- Am I looking to test against emulators or real devices?
- Do I need a framework that supports cross-platform scripting?
- Do I need a framework that offers an easy interface for tests to auto-generate scripts? For instance, scripts can be automated without any programming or scripting language knowledge.
- Am I practicing continuous integration and need a tool that integrates into the broader environment seamlessly?
- Am I looking for a framework that will attract existing developers to contribute to my automation goals?
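One lightweight way to act on a checklist like this is a weighted scorecard. Everything below (the tools, ratings, and weights) is invented for illustration:

```python
# Illustrative requirement weights drawn from questions like those above.
weights = {
    "cross_platform": 3,
    "cloud_support": 2,
    "ci_integration": 3,
    "bdd_support": 1,
}

# 0-5 ratings per candidate tool (hypothetical values).
candidates = {
    "ToolA": {"cross_platform": 5, "cloud_support": 4,
              "ci_integration": 5, "bdd_support": 2},
    "ToolB": {"cross_platform": 2, "cloud_support": 5,
              "ci_integration": 3, "bdd_support": 5},
}

def score(ratings):
    # Weighted sum over the requirement dimensions.
    return sum(weights[k] * ratings[k] for k in weights)

ranked = sorted(candidates, key=lambda t: score(candidates[t]), reverse=True)
print(ranked[0], score(candidates[ranked[0]]))
```

The point is not the arithmetic but the discipline: weighting forces the team to state which requirements actually matter before tool demos begin.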
Take a step back and consider your resources. Does your QA team have sufficient programming knowledge for automation development? For automation, you must have people with some programming knowledge. If not, do they have the technical capabilities to easily adapt to the new technologies?
Answers to these questions will help guide your team to picking the best framework, and also to understanding the team’s capability to execute on testing.
Automation environment and setup depends on the approach to testing. Automation testing approaches are either cloud-based or local.
Cloud-based testing provides web-based automation platforms that can be accessed from anywhere in the world with good internet connectivity. It is one standard way to achieve native and hybrid test automation, and the “automation-from-anywhere” feature is a big advantage. You can run scripts from your test framework on most cloud solutions.
Local-based testing involves setting up tools in a test environment and leveraging either emulators/simulators or physical devices to automate testing using popular open-source tools such as Appium, Espresso, or Kif. The additional consideration with on-premise testing is the breadth you can cover, and the time to set up these environments.
Mobile Testing, The Practice
Though the techniques and tools used to automate mobile application testing are complex, we can learn a lot from the days of client/server desktop applications. A few extra mechanisms are required for mobile automation:
- Use a mobile test framework such as Espresso or Kif; this makes it easy for developers and testers to write scripts in the native programming language and opens the door for pair programming;
- Identify the requirements and categorize them based on the mobile application type – Native, Hybrid or Mobile Web;
- Identify the scope and devices that meet the requirements;
- Identify the automation tool that is the best fit. Filter the best tools by performing a Proof of Concept (POC) to prove that the automation can produce real results;
- Design the framework architecture based on the initial requirements.
The test automation strategy runs in parallel with the framework design, which details the technical scope, the test environment, and how scripts run on emulators or physical devices under the chosen automation approach.
Mobile Device Matrix:
In addition to the testing mechanisms, you have to consider the matrix of device types, device OSes, and device browsers to test against. This matrix is significantly larger than in web application testing because each device type has different screen resolution possibilities and different features. For example, the thumbprint authentication in some iPhones is not available in others, yet applications can leverage this feature. Templates for building a mobile automation device matrix require:
- Operating Systems – customizations, missing libraries, driver issues;
- Screen Size – rendering issues, usability, missing layouts;
- Pixel Density – density independence, missing layouts;
- Aspect Ratio – X,Y calculations, overlapping panels, display issues;
- System on a Chip (SoC) – hardware performance, instruction set, battery signal;
- Carrier – network protocol, speed, responsiveness, packet loss.
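A sketch of how such a matrix might be generated and pruned; all device values here are hypothetical, and the browser pairing rules are deliberately simplified:

```python
from itertools import product

# Hypothetical matrix dimensions -- replace with your own market analysis.
os_versions = ["Android 4.4", "Android 5.1", "iOS 8.3"]
resolutions = ["720x1280", "1080x1920"]
browsers = ["native", "Chrome", "Safari"]

full_matrix = [
    (os_v, res, br)
    for os_v, res, br in product(os_versions, resolutions, browsers)
    # Prune impossible pairings: Safari only ships on iOS, and in this
    # simplified sketch Chrome is only paired with Android.
    if not (br == "Safari" and not os_v.startswith("iOS"))
    and not (br == "Chrome" and not os_v.startswith("Android"))
]
print(f"{len(full_matrix)} valid device/OS/browser combinations")
```

Even after pruning the impossible pairings, the combination count grows quickly, which is why picking representative devices (as discussed above) matters so much.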
To build a successful test mobile automation strategy, all stakeholders must understand the business value of automation. Further, all teams, including development, need to buy in to the process. Picking the right automation tool and building the right testing environment are typically the most difficult challenges when implementing a successful test automation process. However, you do need to focus on the long-term success of your product, so it’s helpful to address these important questions when starting the process:
- What are your automation objectives?
- Will your current development processes need to change?
- What test environment support do you need?
- What skills will you need?
Process and Organization + Environment + Technical + Resources + Scope and Roadmap = Test Automation Strategy.
Mobile testing tools are relatively new, and developing an automation strategy specific to your organization ensures that you will get the most business value out of your automation tool. A crucial piece of laying the groundwork for mobile app testing is building a well-balanced testing portfolio that includes automated unit testing, integration testing, WebView testing, automated UI testing and exploratory manual testing of the UI. Appium shows great promise in this area.
Making the leap into mobile testing today is not easy. The good news is that there are many tools being developed to support mobile testing efforts, but the biggest part of the challenge is integrating your testing into your existing environment. The important thing to know is that just because it is hard does not mean you should reduce your efforts compared to your web application; doing so would create a disconnect in software quality, and could ultimately destroy your mobile application strategy from within.
By Greg Sypolt
Greg Sypolt is a Senior Engineer at Gannett Digital with 10 years of focus on project quality, results, and customer satisfaction while serving in multiple leadership roles. He is an expert in all areas of testing – including functional, regression, API, and acceptance testing – in both the automated and manual realm. Greg is experienced with test strategy planning, including scope, estimates, and scheduling. Greg also assumed the role of Automation Architect while helping convert a large scale, global test team from a manual to an automated focus.
Our .NET community is not one to sit still (and we love you all for that!). Today we wanted to celebrate two very busy .NET community members who keep us on our toes with all the new skills they learn and teach to us. Thank you for all you do.

Roberto Freato
Roberto Freato combines his two passions – computer science and working independently – as a freelance IT consultant and trainer. Whether teaching, writing or speaking, Roberto shares his affinity for software architecture and development, prototyping, analysis, training, and improving the customer relationship.
Roberto has been a Windows Azure MVP since 2012 and is the co-author of Microsoft Azure Development Cookbook. He maintains additional certifications with Sun, EXIN, Apple, Cisco, and IBM. Roberto’s other interests include Windows Phone, cloud computing, mobile, ASP.NET/IIS, developer security and .NET (over 40 and counting!).
Keep up with Roberto on his website.

Vidya Vrat Agarwal
As a .NET purist, blogger, community speaker, and author, Vidya Vrat Agarwal works as a .NET consultant. With over 14 years’ experience, Vidya loves contributing to the .NET community through his consulting work, his blog, and as a .NET MVP.
Vidya has also been recognized as a C# Corner MVP and lifetime member of the Computer Society of India (CSI). Outside of .NET, his specialties include architecting and building solutions for Win Forms, ASP .NET, MVC, SQL Server/BI, WCF, SOA, Windows Azure, SDL, MSF, Agile-scrum, TOGAF, Big Data, and Hadoop.
Stay connected with Vidya on Twitter @dotnetauthor.
Janet Gregory and I enjoyed participating in the Quality in Agile conference in Vancouver April 20-21. We paired on a keynote: “Do testers need to code… to be useful?” Our opinion in a nutshell: testers need technical awareness to collaborate effectively with all their team members, but our software delivery teams already should have expert coders!
Even if they don’t write code, testers need to participate in automating regression tests and other useful automation, in collaboration with programmers, business stakeholders and others. Janet and I facilitated an all-day workshop on advanced topics in agile testing, with a focus on automation.
Challenges around automation
After some introductory slides, the 15 workshop participants self-organized into three smaller groups, choosing to sit with people that had similar goals for the day, or who had experience related to their goals. Each person listed their team’s impediments to automation, one per sticky note, and we grouped these on a big wall chart.
Next, everyone dot voted on the topics they wanted to tackle during the workshop. The top three vote-getters were:
- Culture and responsibility – whose job is it to automate?
- Lack of time for automation activities
- Things that make tests hard to automate, such as complexity
(Note: you can find higher-resolution photos of all the session wall charts, including those not in this post.)
Formulating the problem and brainstorming ideas to overcome it
Each group was tasked with writing their own problem statement for the culture and responsibility topic. It is challenging to write a good problem statement! You can see an example at the bottom of the mind map at left. Once the problem was defined, everyone picked up a Sharpie and each team mind mapped on their big piece of easel pad paper.
One group focused on a lack of shared vision and investment in automation at the company level. Another saw a lack of education on both sides.
I thought it was interesting that the third group exploring culture and responsibility mentioned doing social activities together, and honing soft and tech skills including being respectful of each other.
For topic #2, after each group wrote their problem statement around the lack of time for automation, we tried a different brainstorming technique: Brainwriting. Each person wrote their ideas for dealing with complexity and other things that make automation difficult on a plain piece of paper. Every three minutes, they passed their paper to the group member to their right. They read what was written already, then wrote more ideas. This continued until each person within the group had written on each paper. Most people agreed that reading other people’s ideas jogged new ones for themselves. This technique lets people who might not be comfortable coming forward to draw on a mind map or say their ideas aloud contribute equally.
For topic #3 (sample problem statement to the left), we did “brainwriting with a twist”, an idea of Janet’s. Each team started by drawing, mindmapping or brainwriting ideas on a big flip chart page. After 10 minutes, each group moved to the next group’s flip chart, read the problem statement and ideas, and added their own. Some specific ways to design better automation code came out of this, as well as ideas for better tester-coder collaboration and ways to make these problems more visible.
Of course, there is more to problem solving than brainstorming ideas. Janet presented a model of Esther Derby’s (left): define the problem and desired outcome, understand the context and requirements around potential solutions, design experiments, try them and evaluate the results. Each team spent time coming up with experiments they will try when back with their own teams. We hope that participants will report back to us on how their experiments went!
Got automation challenges – or any challenges related to quality and testing, for that matter? Get your team together, try out some brainstorming techniques, make it comfortable and safe for each person to contribute their ideas. Identify the biggest problem, brainstorm a couple of experiments to try to make that problem smaller, and use your retrospectives to evaluate the results. Keep experimenting, inspecting and adapting. Over time, your problems will be smaller and your successes bigger. But remember to celebrate even the small successes!