In continuing my infra upgrade work, this weekend I'll be migrating JIRA to another server.
This will make upgrades more manageable and testable. The service will be disrupted for a few hours; follow @jenkinsci on Twitter for up-to-the-minute status.
Once the migration is done, the next step is to upgrade it.
The book starts with the promotion of Bill Palmer from Director of Midrange Operations to VP of IT Operations (albeit a somewhat reluctant promotion on Bill's part). Bill quickly discovers that his newly inherited world of IT Operations is a bit of a disaster. The first part of the book describes, in painstaking detail, the various issues Bill faces: production failures, too many audit findings, technical debt, organizations at each other's throats, and fragile artifacts and systems. And on top of all of this, Bill is tasked with launching the company's next-generation "bet the business" platform for ecommerce and point of sale systems, the Phoenix Project, which is already behind schedule and at risk.
The fun starts when Bill meets Erik, the mysterious, wizened "Obi-Wan Kenobi"-like character who helps Bill find his way from chaos to DevOps nirvana. Erik leads Bill through a transformation via a number of breakthroughs that improve how the IT organization runs. They identify many of the root causes of the challenges they face, such as:
- The volume of Work In Progress (WIP) that is bottlenecked, where (or with whom) those bottlenecks sit (and how reducing the work going to the bottleneck can help).
- The amount of unplanned work that impacts their operations (and how planned preventative work can help).
- The lack of real understanding of how the work flows and what the handoffs are (and how proper documentation, planning and Kanban boards can help here).
- The impact of audit and infosec requirements (and how correctly scoping these can REALLY help).
- The real amount of manual labor involved in every aspect of their operations (and how automation technologies can be super helpful here).
Erik gradually leads Bill to the vision of continuous delivery and how leveraging automation in the application development and delivery lifecycle can resolve a number of these issues by optimizing the flow of WIP, ensuring application quality at each stage of the journey, and guaranteeing that the environments and applications are the same across the stages of the lifecycle. Here's how Erik puts it:
“Your next step should be obvious by now, grasshopper. In order for you to keep up with customer demand, which includes your upstream comrades in Development,” he says, “you need to create what Humble and Farley called a deployment pipeline. That’s your entire value stream from code check-in to production. That’s not an art. That’s production. You need to get everything in version control. Everything. Not just the code, but everything required to build the environment. Then you need to automate the entire environment creation process. You need a deployment pipeline where you can create test and production environments, and then deploy code into them, entirely on-demand. That’s how you reduce your setup times and eliminate errors, so you can finally match whatever rate of change Development sets the tempo at.”1
Erik is describing continuous delivery, the application lifecycle management approach that is rapidly taking hold across industries. And interestingly, there are technologies available today that are specifically designed to help you implement continuous delivery. Jenkins, the industry's most popular continuous integration server, is now being extended beyond the build and test stages to orchestrate the full continuous delivery process. The Jenkins Workflow capability allows DevOps practitioners to create full deployment pipelines just as Erik describes. If the Phoenix Project had leveraged Jenkins for continuous delivery from day one, the project would surely have been more successful, but this book would also have been much less interesting! As mentioned above, The Phoenix Project is a great read and should be required reading for everyone in this industry.
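The pipeline Erik describes can be modeled as a simple stage gate: every change flows through the same automated stages, and a failure at any stage stops the line. Here is a toy Python sketch of that idea; the stage names and checks are invented for illustration and are not a real Jenkins configuration.

```python
# Toy model of a deployment pipeline: each stage must pass before a change
# can be promoted to the next one. Stage names/checks are illustrative only.
def run_pipeline(change, stages):
    """Run a change through each stage in order; stop at the first failure."""
    results = []
    for name, check in stages:
        passed = check(change)
        results.append((name, passed))
        if not passed:
            break  # a red stage blocks promotion to the next one
    return results

stages = [
    ("commit/build", lambda c: c["compiles"]),
    ("automated test", lambda c: c["tests_pass"]),
    ("deploy to test env", lambda c: c["env_in_version_control"]),
    ("deploy to production", lambda c: c["approved"]),
]

good = {"compiles": True, "tests_pass": True,
        "env_in_version_control": True, "approved": True}
bad = {"compiles": True, "tests_pass": False,
       "env_in_version_control": True, "approved": True}

print(run_pipeline(good, stages))  # all four stages pass
print(run_pipeline(bad, stages))   # stops at the failing test stage
```

The point of the sketch is the gating, not the checks themselves: because the whole environment and application are built the same way every time, a failure pinpoints the stage where something went wrong.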
If you would like to learn more about how you can apply continuous delivery (CD) in your world (perhaps to ensure that no one writes a book about your software delivery disasters), come to the CD Summit World Tour this summer. In fact, Gene Kim, author of The Phoenix Project, will be a keynote speaker at the Washington DC and San Francisco events so bring your own copy of The Phoenix Project and have it autographed!
1) Kim, Gene; Behr, Kevin; Spafford, George (2013-01-10). The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win (Kindle Locations 4373-4378). IT Revolution Press. Kindle Edition.
Senior Director, Product Marketing
Follow Dan on Twitter: @DanJuengst
The 2014 SyScan conference already ushered in a new era of testing by offering a $10,000 bounty to any tester who was able to remotely access a Tesla Model S's automobile operating system. This latest Tesla testing escapade makes that one seem like child's play. According to Gas2, as the Tesla Model X nears its […]
We’re happy to announce QA Wizard Pro 2015.1. This upgrade includes several new features and statements along with enhancements and bug fixes. A detailed list of the changes can be found in the release notes.
Here is a short list of the new features:
- Specify data sources for called and remote scripts
- Pass more parameters to .NET properties
- Perform actions with the new GetVariableValues, FirstRow, LastRow, and DecryptString statements
- Support for Mozilla Firefox 36 and 37, and Google Chrome 41
Take a look at all the new features in the QA Wizard Pro What's New help. Don't have QA Wizard Pro 2015.1 yet?
This week, Sauce Labs co-founder and Selenium creator Jason Huggins came to visit to chat about his leave of absence to help fix HealthCare.gov. For those who missed his talk at the Selenium meetup, we're happy to report that we got our hands on a recording. Check out the video below to watch. In the video, Jason references a federal "digital playbook" that was written and published following the HealthCare.gov overhaul. Point #10 in the playbook – "Automate Testing and Deployments" – was suggested and driven by Jason. #TestAllTheThings
In late 2013, Selenium creator Jason Huggins joined President Obama’s “tech surge” team to help fix HealthCare.gov. In D.C. during the height of the crisis in November and December 2013, Jason had a behind-the-scenes view into a unique period in American history when a website’s quality (or lack thereof) had the attention of the nation, the press, the President, and Congress.
In this talk, Jason will share some of his stories from the HealthCare.gov turnaround and the “HealthCare 2.0″ effort in mid-2014. Jason will talk about the newly created U.S. Digital Services and how it was created out of the original HealthCare.gov crisis. He’ll also cover the U.S. Digital Services Playbook and what the role of automated testing and deployment will be in future U.S. Government projects.
Lastly, Jason will talk about opportunities for Silicon Valley to help government build effective digital services in the future.
Jason is a software engineer living in Chicago. He started the Selenium project in 2004 at ThoughtWorks. He later joined Google to work on large-scale web testing for Gmail, Google Maps, and other teams. He left Google to co-found Sauce Labs as CTO to create a cloud-based Selenium service. In late 2013, Jason took leave from Sauce to help with the HealthCare.gov turnaround. He is also the creator of Tapster, a mobile app testing robot that's been featured in Popular Science, Wired, TechCrunch, and the MIT Technology Review.
uTest Test Team Leads (TTLs) provide an invaluable service to Applause Project Managers (PMs) and customers. As such, it is important we ensure that TTLs are able to conduct their work in an efficient and effective way. This week’s Platform Updates are focused around TTL workflows and productivity enablement. TTL: Mandatory value tier suggestions and […]
The post uTest Tester Platform: TTL and App Updates for April 23, 2015 appeared first on Software Testing Blog.
In the first part of this series, I shared practical advice on how to automatically deploy Dynatrace Agents into distributed enterprise applications using Ansible in less than 60 seconds. While we had conveniently assumed its presence back then, we will today address the automated installation of our Dynatrace Application Monitoring solution comprising Clients, Collectors and […]
The post How to Automate Enterprise Application Monitoring with Ansible – Part II appeared first on Dynatrace APM Blog.
This is an excerpt from my upcoming book, Fifty Quick Ideas To Improve Your Tests
As a general rule, teams focus the majority of their testing activities on their zone of control: the modules they develop, or the software they directly deliver. But ignoring the competition when planning testing is just as irresponsible as ignoring it when managing product development in general, whether the field is software or consumer electronics.
Software products that are unique are very rare, and it’s likely that someone else is working on something similar to the product or project that you are involved with at the moment. Although the products might be built using different technical platforms and address different segments, key usage scenarios probably translate well across teams and products, as do the key risks and major things that can go wrong.
When planning your testing activities, look at the competition for inspiration — the cheapest mistakes to fix are the ones already made by other people. Although it might seem logical that people won’t openly disclose information about their mistakes, it’s actually quite easy to get this data if you know where to look.
Teams working in regulated industries typically have to submit detailed reports on problems caught by users in the field. Such reports are kept by the regulators and can typically be accessed in their archives. Past regulatory reports are a priceless treasure trove of information on what typically goes wrong, especially because of the huge financial and reputation impact of incidents that are escalated to such a level.
For teams that do not work in regulated environments, similar sources of data could be news websites or even social media networks. Users today are quite vocal when they encounter problems, and a quick search for competing products on Facebook or Twitter might uncover quite a few interesting testing ideas.
Lastly, most companies today operate free online support forums for their customers. If your competitors have a publicly available bug tracking system or a discussion forum for customers, sign up and monitor it. Look for categories of problems that people typically inquire about and try to translate them to your product, to get more testing ideas.
For high-profile incidents that have happened to your competitors, especially ones in regulated industries, it's often useful to conduct a fake post-mortem. Imagine that a similar problem was caught by users of your product in the field and reported to the news. Try to come up with a plausible explanation of how it might have happened, and hold a fake retrospective about what went wrong and why such a problem was allowed to escape undetected. This can help to significantly tighten up testing activities.

Key benefits
Investigating competing products and their problems is a cheap way of getting additional testing ideas, not about theoretical risks that might happen, but about things that actually happened to someone else in the same market segment. This is incredibly useful for teams working on a new piece of software or an unfamiliar part of the business domain, when they can’t rely on their own historical data for inspiration.
Running a fake post-mortem can help to discover blind spots and potential process improvements, both in software testing and in support activities. High-profile problems often surface because information falls through the cracks in an organisation, or people do not have sufficiently powerful tools to inspect and observe the software in use. Thinking about a problem that happened to someone else and translating it to your situation can help establish checks and make the system more supportable, so that problems do not escalate to that level. Such activities also communicate potential risks to a larger group of people, so developers can be more aware of similar risks when they design the system, and testers can get additional testing ideas to check.
The post-mortem suggestions, especially around improving the support procedures or observability, help the organisation to handle 'black swans': unexpected and unknown incidents that won't be prevented by any kind of regression testing. We can't know upfront what those risks are (otherwise they wouldn't be unexpected), but we can train the organisation to react faster and better to such incidents. This is akin to government disaster relief organisations holding simulations of floods and earthquakes to discover facilitation and coordination problems. It's much cheaper and less risky to discover things like this in a safe simulated environment than learn about organisational cracks when the disaster actually happens.

How to make it work
When investigating support forums, look for patterns and categories rather than individual problems. Due to different implementations and technology choices, it’s unlikely that third-party product issues will directly translate to your situation, but problem trends or areas of influence will probably be similar.
One particularly useful trick is to look at the root cause analyses in the reports, and try to identify similar categories of problems in your software that could be caused by the same root causes.
Visit The Build Doctor for the full article.
The team is proud to announce the release of SonarQube 5.1, which includes many new features:
- New issues page & improved issue management
- New rules page
- Improved layout and navigation
- Simplified component Viewer
- All text files in a project imported
- Preview analysis timezone issue solved
Vast improvements in issue handling have gone into this version. First, there’s the replacement of the Issues Drilldown with the full power of the Issues page, contextualized to the current project.
Next are the long-awaited issue tags! Issues inherit tags from their rules, but the list is user-editable per issue.
Issue tags also come with a new widget, to show the distribution of issues in a project by tag:
Also on the long-awaited list is the ability to mark an issue “Won’t fix”. Choose that option from the dropdown and the issue disappears from issue counts and technical debt calculations at the next analysis.
Another key improvement is the automatic assignment of new issues to the last modifiers of the relevant lines. SonarQube user accounts are matched automatically to committers when possible, but it's also possible to make those associations manually.
And finally in the Issues Management area, the functionality of the Issues Report plugin has been moved into core, so you get those capabilities out of the box now.

New Rules Page
The Rules page has also made the final step in its transition. Its new page structure will be familiar from the Issues page, and Rules now features the same powerful and intuitive search facets.
When you’re in a Rule Profile context, inheritance is now clearly displayed in the results list, and it’s easy to toggle your search between what is and is not activated in the profile.
The rule detail has been enhanced too, most notably by the addition of linked issue counts for each rule.
Improved Layout and Navigation
The first thing you’ll notice is that you’ve got more horizontal space for content, because we’ve removed the blue navigation bar on the left.
Global navigation is in the top menu and a sub-menu has been added for navigation within a project:
The new top menu features a home icon on the left – the SonarQube logo by default – which can be customized with your own logo.
And by default, the search menu (keyboard shortcut: s) now starts with your recently-used items:
We’ve also made the help menu more obvious. You could see it before with the ‘?’ keyboard shortcut, but now there’s an icon too.
The Component Viewer has been simplified in this version: there's no more need to turn decorations on and off; it's all on by default.
And the “Show Details” option in the More Actions menu pulls up a display of all the file metrics.
It’s now possible to import all the files in your project. This allows you to have a fuller view of your project in SonarQube and to create manual issues on those files.
Preview Analysis Timezone Issue Solved
And finally, the timezone problem that kept people in different timezones than their SonarQube servers from performing preview analysis has been fixed. There's not much to show for this point, but it's significant enough to many to deserve a mention here.

That's All, Folks!
Time now to download the new version and try it out. But don’t forget that you’ll need Java 7 to run this version of the platform (you can still analyse Java 6 code), and don’t forget to read the installation or upgrade guide.
Anything and everything can piss your users off.
Critical bugs will cause users to never come back. Small bugs will chip away at their experience. Either way, they won't return.
If you want people to download your app, you need a high app store rating. Anything short of five stars can deter users from downloading your app.
No amount of marketing can convince a user that your app is necessary in their lives. If you seek to create a necessity, then you need to focus on the product.
There are a lot of other great blog posts out there that talk about bugs you should avoid. Implementing their advice will boost your app store rating, but many of those fixes demand an entire team's focus and attention.
Instead, let's focus on the small fixes and initiatives. These fixes might not take your app from one star to five, but they will improve your app store rating and weed out common complaints.
1. Do Usability Testing
Usability testing can be an enormous task. You can dedicate entire teams to it. I don’t recommend you start there. But a little bit of your own usability testing can go a long way.
Users have expectations. Your app needs to make sure that it meets every single one of them. When you’re close to the app it’s easy to get lost in what you think makes sense.
You can get started with usability testing by yourself. Take your app, go up to someone who had no involvement in its creation, and tell them to play with it. After that, just sit back and watch. If they ask questions, give vague answers. Never give them any information that could alter how they use the app; their behavior is already altered by your being there and watching them.
One of my favorite tests is the five second test. The five second test is what it sounds like. You show the participant the screen for five seconds then ask them a question. Good questions can be: “What is the purpose of this app?” or “What stood out most to you?” This is useful for landing pages as well as app store description pages.
When a user lands on a page their attention span is about 2-3 seconds. This means you have 2-3 seconds before they forget about you. This test makes sure your page is as clear as possible.
Another great test is to explain the scope of your app and ask where they would click to get a task done. Based on this information you will be able to see how intuitive your app actually is.
Usability testing can be a large task, usually left to dedicated testers and designers, but these are some quick ways to get started. If you're looking to go deeper into usability testing, I would recommend looking into Testlio. Our testers will point out anything that feels unnatural in your app and suggest improvements.
Your users will show you how they think your app should work. Don't try to force them to think the way you want them to; that's the quickest way to create an upset user. After a few interviews, pass what you've learned back to your developers and design team. It's your company's job to match how your users think, not to force them to think like you.
2. Clear copy
If you want users to use your app, you need to be able to sell it without being there.
Your app needs to be able to sell itself through effective imagery and text. Users don't come back to apps. In fact, 80-90% of users who download an app never open it again. Your first impression matters.
Must Read: How to impress your users on the first date.
When your users download your app, they need to understand its value. Your app’s copy sets the tone of your user’s journey. Your copy sets expectations. So write great copy.
Writing great copy is one of the most challenging tasks you can take on. What you think makes sense usually doesn’t. That’s why the best way to write great copy is to write none at all.
If you’re fortunate enough to have a group of power users, ask them the following questions:
1. How would you explain [app name] to a friend?
2. What do you use us for?
3. What features do you find the most valuable?
These three questions alone will solve your copy problems. You shouldn’t try to make up copy that you think your users will resonate with. Instead find an advocate and get copy straight from them. No one knows how to communicate with your potential users better than your current users.
3. Simplify forms
Forms have little to do with the value of your app. But too often I see comments saying "They ask for too much information" - one star. These reviews are the worst: they carry the same weight as a constructive one-star review, with none of the help.
There’s a time and place to ask for a lot of information. Most people ask at the beginning, but it doesn’t need to be that way.
Sometimes asking for more information is great for filtering out low-quality users. If that's your goal, increase friction and you will end up with a high-desire user base.
An effective method is to ask for as little information as possible. If you only need their e-mail, password, and photo then only ask for those three.
You may receive less data at the start, but you will increase conversions. As they become more invested in the app, you can ask for more information.
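The "ask only for what you need" rule is easy to encode. Here is a hypothetical Python sketch that validates a signup form against a deliberately minimal set of required fields; the field names are invented for the example.

```python
# Hypothetical minimal signup validation: only the fields we truly need
# up front are required; everything else can be collected later.
REQUIRED_FIELDS = ("email", "password", "photo")

def validate_signup(form):
    """Return the list of missing required fields (empty list means valid)."""
    return [f for f in REQUIRED_FIELDS if not form.get(f)]

print(validate_signup({"email": "a@example.com",
                       "password": "hunter2",
                       "photo": "me.jpg"}))          # []
print(validate_signup({"email": "a@example.com"}))   # ['password', 'photo']
```

Keeping the required list to three fields is the whole trick: the shorter the list, the higher the conversion rate at signup.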
4. Create little surprises
Users love surprises.
Surprises give users a positive perception of your product and make them feel good. They can take any form, and if you're looking for a quick fix, notifications are the easiest to put in place.
Sending notifications and native-looking pop-ups has become easier. Mixpanel has a great notification tool that lets you create pop-ups for users who have reached a certain milestone in your app.
For example, a user who has just posted their tenth photo receives a nice notification telling them how great they are for posting.
This can be set up in minutes. If you don't use Mixpanel for your analytics, Intercom does a great job at this as well.
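The milestone trigger itself is a few lines of logic, whichever tool delivers the message. This sketch is illustrative only; the milestone values and message text are made up, not Mixpanel or Intercom APIs.

```python
# Illustrative milestone trigger: when a user's photo count hits a milestone,
# queue a congratulatory notification. Values and message are invented.
MILESTONES = {10: "Your tenth photo - you're on a roll!"}

def on_photo_posted(photo_count, send):
    """Call send(message) if this photo count is a milestone; return the message."""
    message = MILESTONES.get(photo_count)
    if message:
        send(message)
    return message

sent = []
on_photo_posted(9, sent.append)   # not a milestone, nothing sent
on_photo_posted(10, sent.append)  # tenth photo triggers the notification
print(sent)
```

In a real app, `send` would hand the message off to your notification tool instead of appending to a list.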
When you say nice things to someone in person, it makes it difficult for that person to say anything bad about you. It’s no different in an app. By rewarding your users you make them feel special. When you make people feel special they are more likely to defend you.
Small improvements to your app can create tremendous results. Taking an app from one star to five takes a significant team effort, but if you're looking to improve your app store rating, these quick fixes can go a long way:
1. Do usability testing
2. Write clearer copy
3. Simplify forms
4. Surprise your users
There are other quick fixes that you will likely find in your app. These are the methods that worked for me. If you happen to come across more, I would love to hear them. If you want to share them, reply in the comments below or tweet them to me @willietran_.
At some point in your life, you can probably recall a movie that you and your friends all wanted to see, and all regretted watching afterwards. Or maybe you remember the time your team thought they'd found the next "killer feature" for their product, only to see that feature bomb after it was released.
Good ideas often fail in practice, and in the world of testing, one pervasive good idea that often fails in practice is a testing strategy built around end-to-end tests.
Testers can invest their time in writing many types of automated tests, including unit tests, integration tests, and end-to-end tests, but this strategy invests mostly in end-to-end tests that verify the product or service as a whole. Typically, these tests simulate real user scenarios.
End-to-End Tests in Theory

While relying primarily on end-to-end tests is a bad idea, one could certainly convince a reasonable person that the idea makes sense in theory.
To start, number one on Google's list of ten things we know to be true is: "Focus on the user and all else will follow." Thus, end-to-end tests that focus on real user scenarios sound like a great idea. Additionally, this strategy broadly appeals to many constituencies:
- Developers like it because it offloads most, if not all, of the testing to others.
- Managers and decision-makers like it because tests that simulate real user scenarios can help them easily determine how a failing test would impact the user.
- Testers like it because they often worry about missing a bug or writing a test that does not verify real-world behavior; writing tests from the user's perspective often avoids both problems and gives the tester a greater sense of accomplishment.
Let's assume the team already has some fantastic test infrastructure in place. Every night:
- The latest version of the service is built.
- This version is then deployed to the team's testing environment.
- All end-to-end tests then run against this testing environment.
- An email report summarizing the test results is sent to the team.
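The nightly flow above boils down to "run everything against the deployed build, then summarize the pass rate for the report email." Here is a toy Python model of that summary step; the test names and results are invented for illustration.

```python
# Toy model of the nightly report: given each end-to-end test's result,
# compute the pass percentage that goes into the team's email.
def nightly_report(results):
    """results: dict of test name -> bool. Returns (pass_percent, summary)."""
    passed = sum(results.values())
    pct = 100.0 * passed / len(results)
    summary = f"{passed}/{len(results)} end-to-end tests passed ({pct:.0f}%)"
    return pct, summary

# Invented example: sign-in is broken, so every scenario that signs in fails.
results = {"sign_in": False, "save_doc": True, "share_doc": True, "search": True}
pct, summary = nightly_report(results)
print(summary)  # 3/4 end-to-end tests passed (75%)
```

Note what this number hides: one broken dependency (sign-in) can drag the whole percentage down, which is exactly the dynamic the table below illustrates.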
The deadline is approaching fast as our team codes new features for their next release. To maintain a high bar for product quality, they also require that at least 90% of their end-to-end tests pass before features are considered complete. Currently, that deadline is one day away:
| Days Left | Pass % | Notes |
| --- | --- | --- |
| 1 | 5% | Everything is broken! Signing in to the service is broken. Almost all tests sign in a user, so almost all tests failed. |
| 0 | 4% | A partner team we rely on deployed a bad build to their testing environment yesterday. |
| -1 | 54% | A dev broke the save scenario yesterday (or the day before?). Half the tests save a document at some point in time. Devs spent most of the day determining if it's a frontend bug or a backend bug. |
| -2 | 54% | It's a frontend bug; devs spent half of today figuring out where. |
| -3 | 54% | A bad fix was checked in yesterday. The mistake was pretty easy to spot, though, and a correct fix was checked in today. |
| -4 | 1% | Hardware failures occurred in the lab for our testing environment. |
| -5 | 84% | Many small bugs hiding behind the big bugs (e.g., sign-in broken, save broken). Still working on the small bugs. |
| -6 | 87% | We should be above 90%, but are not for some reason. |
| -7 | 89.54% | (Rounds up to 90%, close enough.) No fixes were checked in yesterday, so the tests must have been flaky yesterday. |
Analysis

Despite numerous problems, the tests ultimately did catch real bugs.
What Went Well
- Customer-impacting bugs were identified and fixed before they reached the customer.
What Went Wrong
- The team completed their coding milestone a week late (and worked a lot of overtime).
- Finding the root cause for a failing end-to-end test is painful and can take a long time.
- Partner failures and lab failures ruined the test results on multiple days.
- Many smaller bugs were hidden behind bigger bugs.
- End-to-end tests were flaky at times.
- Developers had to wait until the following day to know if a fix worked or not.
So now that we know what went wrong with the end-to-end strategy, we need to change our approach to testing to avoid many of these problems. But what is the right approach?
The True Value of Tests

Typically, a tester's job ends once they have a failing test. A bug is filed, and then it's the developer's job to fix the bug. To identify where the end-to-end strategy breaks down, however, we need to think outside this box and approach the problem from first principles. If we "focus on the user (and all else will follow)," we have to ask ourselves how a failing test benefits the user. Here is the answer:
A failing test does not directly benefit the user.
While this statement seems shocking at first, it is true. If a product works, it works, whether a test says it works or not. If a product is broken, it is broken, whether a test says it is broken or not. So, if failing tests do not benefit the user, then what does benefit the user?
A bug fix directly benefits the user.
The user will only be happy when that unintended behavior - the bug - goes away. Obviously, to fix a bug, you must know the bug exists. To know the bug exists, ideally you have a test that catches the bug (because the user will find the bug if the test does not). But in that entire process, from failing test to bug fix, value is only added at the very last step.
| Stage | Failing Test | Bug Opened | Bug Fixed |
| --- | --- | --- | --- |
| Value Added | No | No | Yes |
Thus, to evaluate any testing strategy, you cannot just evaluate how it finds bugs. You also must evaluate how it enables developers to fix (and even prevent) bugs.
Building the Right Feedback Loop

Tests create a feedback loop that informs the developer whether the product is working or not. The ideal feedback loop has several properties:
- It's fast. No developer wants to wait hours or days to find out if their change works. Sometimes the change does not work - nobody is perfect - and the feedback loop needs to run multiple times. A faster feedback loop leads to faster fixes. If the loop is fast enough, developers may even run tests before checking in a change.
- It's reliable. No developer wants to spend hours debugging a test, only to find out it was a flaky test. Flaky tests reduce the developer's trust in the test, and as a result flaky tests are often ignored, even when they find real product issues.
- It isolates failures. To fix a bug, developers need to find the specific lines of code causing the bug. When a product contains millions of lines of code and the bug could be anywhere, finding it is like trying to find a needle in a haystack.
Unit Tests

Unit tests take a small piece of the product and test that piece in isolation. They tend to create that ideal feedback loop:
- Unit tests are fast. We only need to build a small unit to test it, and the tests also tend to be rather small. In fact, one tenth of a second is considered slow for unit tests.
- Unit tests are reliable. Simple systems and small units in general tend to suffer much less from flakiness. Furthermore, best practices for unit testing - in particular practices related to hermetic tests - will remove flakiness entirely.
- Unit tests isolate failures. Even if a product contains millions of lines of code, if a unit test fails, you only need to search that small unit under test to find the bug.
Writing effective unit tests requires skills in areas such as dependency management, mocking, and hermetic testing. I won't cover these skills here, but as a start, the typical example offered to new Googlers (or Nooglers) is how Google builds and tests a stopwatch.
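Google's actual stopwatch material isn't reproduced here, but the core idea of hermetic unit testing can be sketched in a few lines of Python: inject the time source so the test never touches the real clock, making it fast and deterministic. The class and test double below are illustrative inventions.

```python
# A minimal take on the stopwatch idea: inject the clock (dependency
# injection) so the unit test is hermetic - no real time passes, no flakiness.
import time

class Stopwatch:
    def __init__(self, clock=time.monotonic):
        self._clock = clock      # real clock in production, fake clock in tests
        self._start = None

    def start(self):
        self._start = self._clock()

    def elapsed(self):
        return self._clock() - self._start

class FakeClock:
    """Test double that only advances when the test says so."""
    def __init__(self):
        self.now = 0.0
    def __call__(self):
        return self.now

# The "unit test": simulate 2.5 seconds passing, instantly and deterministically.
clock = FakeClock()
watch = Stopwatch(clock=clock)
watch.start()
clock.now += 2.5
print(watch.elapsed())  # 2.5
```

Because the fake clock is fully under the test's control, the test runs in microseconds and produces the same result every time, which is exactly the fast, reliable feedback loop described above.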
Unit Tests vs. End-to-End Tests

With end-to-end tests, you have to wait: first for the entire product to be built, then for it to be deployed, and finally for all end-to-end tests to run. When the tests do run, flaky tests tend to be a fact of life. And even if a test finds a bug, that bug could be anywhere in the product.
Although end-to-end tests do a better job of simulating real user scenarios, this advantage quickly becomes outweighed by all the disadvantages of the end-to-end feedback loop:

| | End-to-End Tests | Unit Tests |
|---|---|---|
| Fast | No | Yes |
| Reliable | No | Yes |
| Isolates Failures | No | Yes |
| Simulates a Real User | Yes | No |
Integration Tests
Unit tests do have one major disadvantage: even if the units work well in isolation, you do not know whether they work well together. To verify that, you do not necessarily need end-to-end tests; instead, you can use an integration test. An integration test takes a small group of units, often just two, and tests their behavior as a whole, verifying that they work together coherently.
If two units do not integrate properly, why write an end-to-end test when you can write a much smaller, more focused integration test that will detect the same bug? While you do need to think larger, you only need to think a little larger to verify that units work together.
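As a rough sketch of "thinking a little larger," consider two hypothetical units, a `Tokenizer` and a `WordCounter` (both names invented for illustration). Each can be unit-tested alone; the integration test checks only the seam between them, with no full product build or deployment required:

```python
import unittest

# Two small units, each unit-testable in isolation.
class Tokenizer:
    def tokenize(self, text):
        return text.lower().split()

class WordCounter:
    def count(self, tokens):
        counts = {}
        for token in tokens:
            counts[token] = counts.get(token, 0) + 1
        return counts

# Integration test: exercises the seam between the two units,
# e.g. that WordCounter correctly merges the lowercased tokens
# that Tokenizer emits.
class TokenizerCounterIntegrationTest(unittest.TestCase):
    def test_counts_merge_across_letter_case(self):
        tokens = Tokenizer().tokenize("Go go GO")
        self.assertEqual({"go": 3}, WordCounter().count(tokens))

if __name__ == "__main__":
    unittest.main()
```

If casing broke at the boundary between the two units, this small test would catch it just as surely as an end-to-end test would, while staying fast and easy to debug.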
Testing Pyramid
Even with both unit tests and integration tests, you will probably still want a small number of end-to-end tests to verify the system as a whole. To find the right balance between all three test types, the best visual aid is the testing pyramid. Here is a simplified version of the testing pyramid from the opening keynote of the 2014 Google Test Automation Conference:
The bulk of your tests are unit tests at the bottom of the pyramid. As you move up the pyramid, your tests get larger, but at the same time the number of tests (the width of your pyramid) gets smaller.
As a good first guess, Google often suggests a 70/20/10 split: 70% unit tests, 20% integration tests, and 10% end-to-end tests. The exact mix will be different for each team, but in general, it should retain that pyramid shape. Try to avoid these anti-patterns:
- Inverted pyramid/ice cream cone. The team relies primarily on end-to-end tests, using few integration tests and even fewer unit tests.
- Hourglass. The team starts with a lot of unit tests, then uses end-to-end tests where integration tests should be used. The hourglass has many unit tests at the bottom and many end-to-end tests at the top, but few integration tests in the middle.
Modern organizations feel immense pressure to deliver better software faster, and this is no different in the mobile space. Continuous Integration has been embraced by web developers for years as a proven mechanism for accelerating production cycles. However, mobile developers have been slow to adopt CI, despite needing a quick go-to-market plan.
In large part, this is because mobile brings with it a set of unique challenges that make implementation tough. Nevertheless, tools have evolved and mobile development teams now have many options to choose from to implement a solid mobile CI system.
In our next webinar, Kevin Rohling (Emberlight, Ship.io) and Kristian Meier (Sauce Labs) will cover best practices in implementing a mobile CI system and demonstrate how you can easily build, test, and deploy mobile apps.
This webinar will cover:
- What makes mobile CI so different
- Best ways to use emulators and simulators in testing
- Suggestions for CI tools and testing frameworks for mobile
Join us for this presentation on Tuesday, April 28 at 11am PDT/1pm EDT. There will be a Q&A with both Kevin and Kristian following the end of the presentation.
Click HERE to register today.
Want to get more mobile CI tips? Check out Kevin’s last blog post.
I have some exciting news: the agendas have been posted for the Jenkins User Conferences (JUC) to be held at U.S. East (Alexandria, VA) and Europe (London). Take a look here to learn more about the talks, speakers and schedules.
As always, there is a great lineup of presenters ready to share their Jenkins stories: Peter Vilim will be presenting “Proving a First Class User Experience with Jenkins” at the U.S. East JUC, and Sander Kieft’s talk is called “Automating a Big Data Platform with Jenkins” at JUC Europe. Learn more about all 2015 JUC speakers and talks here. Explore the pages and see the who/what/where of all JUC 2015 locations!
You will see some familiar names and talks as well: Andrew Bayer will be presenting his very popular talk called “Seven Habits of Highly Effective Jenkins Users” at JUC Europe. Will Soula is returning this year to JUC U.S. East to “chat” about “Chat Ops and Jenkins.” Lorelei McCollum is also back with two talks at JUC U.S. East called “Jenkins 101” and “Getting Groovy with Jenkins.”
This year, you will notice a few differences in the JUC agendas. JUC is now a two-day conference in the U.S. East, Europe and U.S. West locations! Also, each session is assigned a category according to its content: Continuous Delivery, Best Practices, Operations, Plugins, Case Studies/War Stories and more. This will help you decide which talks to attend. You will also notice that several talks, especially in JUC Europe, reflect the industry’s growing interest in big data and Docker.
The agendas are still being finalized for JUC Israel and JUC U.S. West. If you are interested in speaking at either of these locations, you can still send in your talk proposals. The U.S. West deadline is May 3 and the Israel deadline is May 15.
JUC is such a great opportunity for the community to come together and network face-to-face. You can not only meet Kohsuke Kawaguchi, creator of the Jenkins project, and Gene Kim, author of The Phoenix Project and DevOps expert, but you will also have the opportunity to meet Jenkins users, just like you, from all over the world. And this year, with the Jenkins project at well over 100K active installations, JUC as a whole will be the largest gathering of Jenkins users ever.
Early bird pricing for JUC U.S. East and Europe ends May 1, so REGISTER NOW to take advantage of the lower pricing.
As with every JUC, there is a great lineup of speakers eager to share their experience, expertise and knowledge with the Jenkins community: Martin Hobson will be presenting “Visualizing VM Provisioning with Jenkins and Google Charts” at U.S. East, and Pradeepto K. Bhattacharya’s talk is called “Orchestrating Your Pipelines with Jenkins, Python and the Jenkins API” at JUC Europe. Learn the who/what/where of all 2015 JUC locations here.
In each agenda, you will notice some familiar names: Andrew Bayer will be presenting his popular talk called “Seven Habits of Highly Effective Jenkins Users” at JUC Europe. (You will have to race to get a seat at that session...it will fill up!) Will Soula is back this year to JUC U.S. East to “chat” about “Chat Ops and Jenkins.” Lorelei McCollum is also returning with two sessions at JUC U.S. East: “Jenkins 101” and “Getting Groovy with Jenkins.”
You will notice a few differences in the JUC agendas for 2015. In the U.S. East, Europe and U.S. West locations, JUC will be a two-day conference! Another change is that each session has been assigned a category according to its content: Continuous Delivery, Large Scale Jenkins Implementations, DevOps, Scalability and more. This will help you decide which talks to attend. You will also see that several talks reflect the industry’s heightened interest in big data and Docker, especially in the agenda for JUC Europe.
The agendas are still in the works for JUC Israel and JUC U.S. West. There is still time to submit a speaking proposal for either of these JUC locations. The U.S. West deadline is May 3 and the Israel deadline is May 15.
JUC is the perfect opportunity for the Jenkins community to come together and network in person. You will meet Kohsuke Kawaguchi, creator of the Jenkins project, Gene Kim, author of The Phoenix Project and DevOps expert, and if you are lucky you may also meet the butler! This year, with the Jenkins project at over 100K active installations, the 2015 JUC World Tour will be the largest gathering of Jenkins users on earth.
Early bird pricing is still available for JUC U.S. East and Europe until May 1, so REGISTER NOW.
Sponsorships for the 2015 JUC and CD Summit World Tour are still available for all locations! Show your support for the Jenkins community.