Brain Rules: 12 Principles for Surviving and Thriving at Work, Home and School by John Medina – A description of rules about how our brain works and how we learn. Our visual senses tend to trump our sense of smell. We need sleep to restore our energy and to help us concentrate. Spaced repetition is important, but assigning meaning to new words and concepts is also important to learning. Since I’m fascinated with learning and how the brain works, I’ll add this to my reading list.
Getting Things Done: The Art of Stress-free Productivity by David Allen – Although I never read the book, I felt like I follow an organisation system similar to the one described. The GTD method is almost like a cult, and it requires a lot of discipline. Rather than keeping a single list of things to do, it offers a systemised way of tracking long-lived projects and managing tasks to help you focus on getting through actions. Probably a good book if you want to focus more on breaking things down into smaller steps.
The Checklist Manifesto: How to Get Things Right by Atul Gawande – With lots of examples from the healthcare industry, a reminder that useful checklists can help us avoid making simple mistakes. For me, the idea of standardised work (a lean concept) already covers this. I agree with this idea in principle, but I’m not so sure the book covers the negative side effects of checklists as well (people getting lazy) or alternatives to checklists (automation and designing against error/failure demand to begin with).
Connect: The Secret LinkedIn Playbook to Generate Leads, Build Relationships, and Dramatically Increase Your Sales by Josh Turner – Either a terrible summary or a terrible book, this blink gave advice about how to use LinkedIn to build a community. Although the advice isn’t terrible, it’s not terribly new, and I didn’t really find any insights. I definitely won’t be getting a copy of this book.
Start With Why: How Great Leaders Inspire Everyone To Take Action by Simon Sinek – A nice summary of leadership styles that, rather than focusing on how something should be done (the how and the what), starts with the why. I liked the explanation of the Golden Circle, three concentric circles drawn within each other, with the Why being the starting point that leads to the How that ends in the What. It’s a good reminder about effective delegation and how powerful the Why motivator can be. I’ve added this book to my reading list too.
Perception drives end-user experience. Ryan Bateman and I came across a very interesting article by Bryan Gardiner in Wired Magazine describing some of the science around waiting for a page to load. At Dynatrace we are REALLY passionate about this and got to chatting about it. We constantly talk to customers about best practices when […]
The post Perceived render time – You take the blue pill; you believe whatever you want! appeared first on about:performance.
The worst time for your website to go down is during a high demand period. Summer is one of the busiest travel seasons, and when a website is unavailable the impact is incredibly detrimental. Keep reading to find out how to improve your website performance.
Hey there, my name is David Hinske and I work at Goodgame Studios (GGS), a game development company in Hamburg, Germany. As Release Engineer in a company with several development teams, using several Jenkins instances comes in handy. While this approach works fine in our company and gives the developers a lot of freedom, we came across some long-term problems concerning maintenance and standards, problems that were mostly caused by misconfiguration or non-use of plugins. With “configuration as code” in mind, I took the approach of applying static code analysis, with the help of SonarQube, a platform that manages code quality, to all of our Jenkins job configurations.
As a small centralized team, we were looking for an easy way to control the health of our growing Jenkins infrastructure. Building on the “configuration as code” idea, I developed a simple extension of SonarQube to manage the quality and usage of all spawned Jenkins instances. The given SonarQube features (like customized rules/metrics, quality profiles and dashboards) allow us and the development teams to analyze and measure the quality of all created jobs in our company. Even though a Jenkins configuration analysis cannot cover all of SonarQube’s axes of code quality, I think there is still potential for conventions/standards, duplications, complexity, potential bugs (misconfiguration) and design and architecture.
The results of this analysis can be used by all people involved in working with Jenkins. To achieve this, I developed a simple extension of SonarQube, containing everything needed to hook up our SonarQube with our Jenkins environment. The implementation contains a new basic language, “Jenkins”, along with an initial set of rules.
Of course the needs depend strongly on the way Jenkins is being used, so not every rule implemented will be useful for every team, but the same is true of any other code analysis. The main inspiration for the rules was developer feedback and some articles found on the web. The many different ways to use and configure Jenkins provide a lot of potential for many more rules. With this new approach to quality analysis, we can enforce best practices like:
- Polling must die (Trigger builds from pushes instead of polling the repository every x minutes)
- Use Log Rotator (Not using log rotator can result in disk space problems on the master)
- Use slaves/labels (Jobs should be defined where to run)
- Don’t build on the master (In larger systems, don’t build on the master)
- Enforce plugin usage (For example: Timestamp, Mask-Passwords)
- Naming sanity (Limit project names to a sane (e.g. alphanumeric) character set)
- Analyze Groovy Scripts (For example: Prevent System.exit(0) in System Groovy Scripts)
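Rules like these are mechanically checkable. The sketch below is not the actual plugin code, just a rough Python illustration of how two of them (polling and log rotation) could be detected in a job's `config.xml`; the element names follow Jenkins' usual XML serialization, but the config snippet itself is invented:

```python
import xml.etree.ElementTree as ET

# Hypothetical minimal job config; real Jenkins config.xml files are much larger.
CONFIG = """
<project>
  <triggers>
    <hudson.triggers.SCMTrigger>
      <spec>H/5 * * * *</spec>
    </hudson.triggers.SCMTrigger>
  </triggers>
</project>
"""

def check_job(xml_text):
    """Return a list of rule violations for one job configuration."""
    root = ET.fromstring(xml_text)
    violations = []
    # Rule: "Polling must die" -- an SCMTrigger means the repo is polled.
    if root.find(".//hudson.triggers.SCMTrigger") is not None:
        violations.append("polling-must-die: use push triggers instead of SCM polling")
    # Rule: "Use Log Rotator" -- no logRotator element risks filling the master's disk.
    if root.find(".//logRotator") is None:
        violations.append("use-log-rotator: no build-discard policy configured")
    return violations

print(check_job(CONFIG))
```

A real implementation would of course run inside a SonarQube language plugin rather than a standalone script, but the per-rule checks reduce to simple queries like these against each job's XML.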
Besides taking control over all configuration of any Jenkins instance we want, there is also room for additional metrics, like measuring the amount and different types of jobs (Freestyle/Maven etc…) to get an overview about the general load of the Jenkins instance. A more sophisticated idea is to measure complexity of jobs and even pipelines. As code, job configuration gets harder to understand as more steps are involved. On the one hand, scripts, conditions and many parameters can negatively influence the readability, especially if you have external dependencies (like scripts) in different locations. On the other hand, pipelines can also grow very complex when many jobs are involved and chained for execution. It will be very interesting for us to see where and why complex pipelines are being created.
For visualization we rely on the data and its interpretation of SonarQube, which offers a big bandwidth of widgets. Everybody can use and customize the dashboards. Our centralized team for example has a separate dashboard where we can get a quick overview over all instances.
The problem of “growing” Jenkins with maintenance problems is not new. Especially when you have many developers involved, including with access to create jobs and pipelines themselves, an analysis like this SonarQube plugin provides can be useful for anyone who wants to keep their Jenkins in shape. Customization and standards play a big role in this scenario. This talk surely is not an advertisement for my plugin; it is more about the crazy idea of using static code analysis for Jenkins job configuration. I haven’t seen anything like it so far and I feel that there might be some potential behind this idea.
Join me at my Enforcing Jenkins Best Practices session at the 2016 Jenkins World to hear more!
This is a guest post written by Jenkins World 2016 speaker David Hinske. Leading up to the event, there will be many more blog posts from speakers giving you a sneak peek of their upcoming presentations. Like what you see? Register for Jenkins World! For 20% off, use the code JWHINMAN
Blog Categories: Jenkins
What’s your most memorable “blame the network” anecdote? If you’re in network operations, you will likely have many from which to choose. After all, doesn’t the network always get blamed first? To be fair, other teams often feel the same. Citrix and VMware admins have alternately been touted as “the new network guy,” and as […]
The post Because you can’t always blame network operations…or the network! appeared first on about:performance.
If you’re attempting to implement an Agile/Scrum development process where none has existed before, you will surely encounter moments of frustration on the part of your developers. “Why do we have to do these standups?” “I don’t understand why we need to assign story points, can’t we just get to the projects?” “Where is my technical specification?” Like Ralph Macchio in The Karate Kid, your developers may wonder why you have them doing the engineering equivalent of “wax on, wax off,” when what they really want to do is get into the fight. What Ralph Macchio eventually understands is that the performance of rote, rigid external exercises is a first step on the road to internal mastery, a process well known in the world of martial arts as Shu Ha Ri.
In its broader definitions, Shu Ha Ri describes a process of learning: in the Shu stage, the learner follows directions literally and adheres rigidly to whatever rules the teacher has set. In the Ha stage, the learner begins to see how the rules and directions can be adapted for specific situations, and exercises some judgement in how they should be applied. In the Ri stage, the learner has developed her own techniques, and now innovates freely as the situation demands.
Martin Fowler and Alistair Cockburn have written about the role of Shu Ha Ri as it applies to Agile development, but we could characterize the three stages as rigid adherence to the principles and ceremonies of Scrum, followed by what I like to call “pragmatic Scrum” that adapts to the styles and situations of individual teams, which then culminates in true Agility in approaching projects, challenges, and the process itself. The most important thing to take away from the application of Shu Ha Ri to software development, however, is that it is about the internalization of principles, followed by an understanding of their application, which leads finally to innovation in how problems and projects are approached. This is in sharp contrast to other methodologies, like traditional waterfall, that are simply about the imposition of schedules and rules that leave teams stuck in an eternal Shu limbo. It’s difficult to imagine that these teams would experience much satisfaction with their position, much less be capable of innovation.
It’s now been six months since we adopted Scrum at Sauce Labs. We’ve had our Shu period, and, as expected, it was a difficult time. As we implemented Scrum, there were many moments of frustration, questions about why, and some resistance to what were perceived as pointless rituals. It didn’t take long, though, before we had moved into pragmatic Scrum. The teams began to better understand their own abilities, and how to incorporate and adapt Scrum practices to the way they work together. And now, I’m guardedly optimistic that we are entering into Ri, as evidenced by the project to open our data center in Las Vegas. This was a true DevOps project, in that there was no easy separation between development requirements and operational requirements, and it required the cooperative efforts of many teams to accomplish. It also required that teams who had adopted and adapted Scrum learn how to make their particular version fit in with that of their colleagues – they had to take what they had learned, in other words, and improvise upon it. Had they not been able to do this, I have no doubt that we would never have been able to accomplish this monumental task, which had so many dependencies and inter-dependencies. In any traditional project management approach, we would still be writing the specifications, rather than delivering significantly improved performance to our customers. To paraphrase Mr. Miyagi, first we had to learn how to “stand up,” then we learned how to fly.
Joe Alfaro is VP of Engineering at Sauce Labs. This is the sixth post in a series dedicated to chronicling our journey to transform Sauce Labs from Engineering to DevOps. Read the first post here.
The RanoreXPath is a powerful identifier of UI elements for desktop, web and mobile applications and is derived from the XPath query language. In this blog we will show you a few tips & tricks on how to best use the various RanoreXPath operators to uniquely identify UI elements. You can then use these RanoreXPaths in your recording and code modules to make your automated tests more robust.
Using RanoreXPath operators
- Search for multiple button elements
- Identify controls with a specific attribute
- Identify checkboxes by combining attributes
- Recognize related elements using the parent operator
- Recognize related elements by using preceding- and following-sibling
- Identify attributes fields using regular expressions
- Identify attributes with dynamic values
The Ranorex Spy displays the UI as hierarchical representation of elements in the Element Browser view. The RanoreXPath can be used to search and identify items in this UI hierarchy.
In this example, we’ll use the tool KeePass as application under test (AUT). This open source password manager application is one of our sample applications delivered with Ranorex Studio. If you have multiple applications open, Ranorex Spy will list them all. Filtering the application you want to test will increase speed and give you a better overview. To do so, track the application node of KeePass and set it as root node (context menu > ‘Set Element as Root’). Now, only the main KeePass form and its underlying elements are visible.
General Layout of RanoreXPath
RanoreXPath expressions are similar to XPath expressions. They share both syntax and logical behavior. A RanoreXPath always consists of adapters, attributes and values:
The adapter specifies the type or application of the UI element. The attribute and values specify adapter properties.
The absolute RanoreXPath of our KeePass form looks like this:
The form is an adapter specifying the type or classification of the UI element. It is followed by the attribute value comparison, which identifies the requested element. In this example, the comparison operator is a simple equality.
If you want to know more about how the RanoreXPath works, we recommend our dedicated user guide section.
Search for multiple button elements
You can list all button elements that are direct children of a designated position in your AUT. Have a look at these two examples:
1. List all buttons that are direct children of the KeePass toolbar:
To do so, simply set the toolbar as root node and type ./button into the RanoreXPath edit field, directly after the given RanoreXPath.
This will create a relative path to all child nodes of the actual node, which are buttons.
2. List all buttons of your AUT:
Navigate back to the form adapter, set it as root node and type in .//button.
You’ve now created a relative path to all descendants of the actual node, which are buttons. These are all buttons of all levels of the subtree of the current element.
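The child step `./button` and the descendant step `.//button` behave like their XPath counterparts. As a rough stand-in for the Ranorex engine (the tree and button names here are invented, not KeePass's real element hierarchy), Python's standard-library ElementTree shows the difference:

```python
import xml.etree.ElementTree as ET

# Hypothetical UI tree: a form with one direct button and a toolbar with two more.
UI = ET.fromstring("""
<form>
  <button name="OK"/>
  <toolbar>
    <button name="New"/>
    <button name="Open"/>
  </toolbar>
</form>
""")

direct = UI.findall("./button")        # direct children of the form only
all_buttons = UI.findall(".//button")  # button descendants at any depth

print(len(direct), len(all_buttons))   # 1 direct button, 3 in the whole subtree
```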
Identify controls with a specific attribute
You can also create a path to controls, to filter them according to specific attributes. In this example, we want to find all checked checkboxes.
Open the “Find” dialog in KeePass (<CTRL><F>), as this dialog contains checkboxes, and set it as root node. Now, you can validate which item of the checkbox control has the attribute “checked” set to true. To do so, enter "//checkbox[@checked='True']":
As you can see, only the checked checkboxes will be visible in the Element Browser.
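The `[@attribute='value']` predicate works the same way in plain XPath. A small stdlib-Python sketch (the dialog tree and checkbox names below are invented stand-ins, not KeePass's real controls) illustrates the filter:

```python
import xml.etree.ElementTree as ET

# Hypothetical "Find" dialog with three checkboxes in mixed states.
DIALOG = ET.fromstring("""
<form>
  <checkbox name="&amp;Title" checked="True"/>
  <checkbox name="&amp;URL" checked="False"/>
  <checkbox name="Notes" checked="True"/>
</form>
""")

# Keep only the checkboxes whose "checked" attribute equals "True".
checked = DIALOG.findall(".//checkbox[@checked='True']")
print([c.get("name") for c in checked])  # ['&Title', 'Notes']
```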
Identify checkboxes by combining attributes
You can further extend the previous example by combining attributes. This enables you to, for example, omit certain items from the search, or search for specific items.
1. Omit a specific item from the search
You can omit a specific item from the search using the “not equal” operator and the “and” conjunction. In this case, we want to omit the item “&Title”:
2. Search for specific items
You can use the “or” instead of the “and” conjunction to extend your search and only look for specific items. Extend the checkbox search to look for the items “&Title” and “&URL”:
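ElementTree's path subset has no "and"/"or" or "not equal" operators, so outside Ranorex the same combined predicates can be expressed as plain Python filters. This is only an analogy (the checkbox names are invented), but it captures the logic of both variants:

```python
import xml.etree.ElementTree as ET

# Hypothetical dialog where all three checkboxes are checked.
DIALOG = ET.fromstring("""
<form>
  <checkbox name="&amp;Title" checked="True"/>
  <checkbox name="&amp;URL" checked="True"/>
  <checkbox name="Notes" checked="True"/>
</form>
""")

boxes = DIALOG.findall(".//checkbox")

# "checked AND name != &Title" -- omit one item from the search.
omitted = [b for b in boxes
           if b.get("checked") == "True" and b.get("name") != "&Title"]

# "name = &Title OR name = &URL" -- search for specific items only.
either = [b for b in boxes if b.get("name") in ("&Title", "&URL")]

print([b.get("name") for b in omitted], [b.get("name") for b in either])
```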
Recognize related elements using the parent operator
After running the Ranorex desktop sample project, there will be two entries in our AUT – one for a WordPress and one for a Gmail account. In this case, we’d like to find the username of the “Gmail” KeePass entry:
Start with the RanoreXPath to the cell containing the text “Gmail” (framed in red). Next, use the relationship operator “parent” to reference the parent node of the current element. In this example, it’s a row (framed in blue). The index “” navigates to the second cell, which contains the Gmail username (framed in green).
Recognize related elements by using preceding- and following-sibling
Another way to search for related elements is to use the relationship operator “preceding-sibling”. In this example, we want to find the title of a KeePass entry based on its username.
The command “preceding-sibling::cell” lists all preceding cells. In this case, the result is the title (framed in green) which corresponds to the given username (framed in red).
In contrast, the command “following-sibling::cell” delivers all following cells. In our case, these are all following cells (framed in blue) that correspond to the given username (framed in red).
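Outside Ranorex, the same parent and sibling navigation can be mimicked in stdlib Python. ElementTree has no parent axis, so a child-to-parent map stands in for the "parent" operator; the entry table below is an invented stand-in for the KeePass entry list:

```python
import xml.etree.ElementTree as ET

# Hypothetical entry table: one row per account, title cell then username cell.
TABLE = ET.fromstring("""
<table>
  <row><cell>WordPress</cell><cell>wp_user</cell></row>
  <row><cell>Gmail</cell><cell>gmail_user</cell></row>
</table>
""")

# ElementTree elements don't know their parent, so build the map up front.
parent = {child: row for row in TABLE for child in row}

gmail_cell = next(c for c in TABLE.iter("cell") if c.text == "Gmail")
row = parent[gmail_cell]        # the "parent" step
cells = list(row)
idx = cells.index(gmail_cell)
preceding = cells[:idx]         # analogue of preceding-sibling::cell
following = cells[idx + 1:]     # analogue of following-sibling::cell

print([c.text for c in following])  # ['gmail_user']
```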
Identify attribute fields using regular expressions
You can also use regular expressions in attribute conditions to identify attribute fields. In this example, we’d like to filter cell adapters that contain an email address in their text attribute. A regular expression matching an email address may look like this: ".+@.+\..+".
The “~” operator instructs Ranorex to filter attribute fields using a regular expression. The “.” in our regular expression matches every single character, while the “+” specifies that the preceding element has to occur one or more times. To escape special characters (such as “.”), enter a backslash before the character.
In our example, every expression will match that contains the character “@” with one or more characters before and after it, followed by a “.”, which is followed by one or more characters.
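The same pattern can be tried in any regex engine. A quick Python check (the sample strings below are invented) shows what the “~” operator would and wouldn't match:

```python
import re

# The pattern the "~" operator would apply to a cell's text attribute.
EMAIL = re.compile(r".+@.+\..+")

texts = ["gmail_user", "someone@example.com", "not-an-email"]
# search() keeps only strings containing "@" with text before and after it,
# followed by an escaped "." and at least one more character.
matches = [t for t in texts if EMAIL.search(t)]
print(matches)  # ['someone@example.com']
```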
For more examples on how to use regular expressions in RanoreXPaths, please have a look at this user guide section: RanoreXPath with regular expression.
Identify attributes with dynamic values
Dynamic attribute values change each time an element is displayed anew. Fortunately, dynamically generated content usually has a prefix or postfix. To identify dynamic elements, you can either use regular expressions, as described above, or use the ‘starts with’ or the ‘ends with’ comparison operators:
- ‘>’: The value of the attribute must start with the given string
- ‘<‘: The value of the attribute must end with the given string
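The same prefix/suffix matching is easy to sanity-check outside Ranorex. In this sketch the element IDs are invented examples of dynamically generated values with a stable prefix or suffix:

```python
# Hypothetical dynamic IDs: the prefix/suffix is stable, the middle changes per render.
ids = ["ctl00_btnSave_8f3a", "ctl00_btnCancel_91bc", "footer_link"]

# '>' operator: the attribute value must start with the given string.
starts = [i for i in ids if i.startswith("ctl00_")]

# '<' operator: the attribute value must end with the given string.
ends = [i for i in ids if i.endswith("_91bc")]

print(starts, ends)
```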
The RanoreXPath enables you to find and uniquely identify every single UI element of desktop, web and mobile applications. You can use the RanoreXPath operators to make your test suite more robust and identify even dynamic attribute values.
Achieving a DevOps transformation is much easier said than done. You don’t just flip a switch and “do” DevOps. It’s also not about buying DevOps tools.
Don’t you wish you could just sit down and talk with someone who’s done it all before? You can! This week, we’re excited to share that Gary Gruver, author and Jenkins World 2016 keynote, joined us on DevOps Radio to talk about leading a DevOps transformation. So plug in your headphones, shut your office door and get comfortable: You’re going to want to hear this!
For those of you who don’t know Gary, he’s co-author of A Practical Approach to Large-Scale Agile Development, a book in which he documents how HP revolutionized software development while he was there, as director of the LaserJet firmware development lab. He’s also the author of Leading the Transformation, an executive guide to transforming software development processes in large organizations. His impressive experience doesn’t stop at author and director at HP, though. As Macys.com’s VP of quality engineering, release and operations, he led the retailer’s transition to continuous delivery.
In this episode of DevOps Radio, Gary and DevOps Radio host Andre Pino dive into the topics covered in Gary’s two books. They talk through the reality of leading a transformation, discussing practical steps that Gary took. They also bring up challenges — ones that Gary faced, and ones that you might, too.
So, what are you waiting for?! Tune in to the latest episode of DevOps Radio. It’s available now on the CloudBees website and on iTunes. Join the conversation on Twitter by tweeting out to @CloudBees and including #DevOpsRadio in your post!
Blog Categories: Jenkins, Company News
Perfecto Mobile, the world’s leader in mobile app quality, provides a hybrid cloud-based Continuous Quality Lab that enables mobile app development and testing teams to deliver better apps faster. The Continuous Quality Lab supports testing processes earlier and more often in the development cycle, giving way to faster feedback and improved time to market.
1. Tell us about yourself. Please share a couple of interesting things not everyone knows about you.
I started to work in the hardware and software industry, then moved to Sun Microsystems as a senior QA manager for seven years. After that I managed verification and validation activities at NeuStar and General Electric. My next position was that of chief technology officer (CTO) at Matrix. And eventually I moved to Perfecto where I am working as a technical evangelist especially in the mobile field, but also in other areas such as web testing.
Mobile testing is one of the most interesting areas of focus for me, and I am constantly blogging about it on my personal blog. I’ve recently moved from Israel to Boston with my family and two cats and am enjoying life in the U.S. You can talk to me on Twitter at @ek121268.
What does an Evangelist do for a living?
Actually a lot of things, but from a macro view I engage and track the market. In my specific case I track the mobile space (thanks to my previous experience I am very familiar with mobile trends) and apply these trends to quality practices.
I also speak frequently at different events, usually about mobile and web quality, on behalf of Perfecto. I contribute a lot of white papers and blogs, host webinars, and spend a lot of time working on product strategy. It’s always changing and always challenging, which is why I love it.
Two interesting things…
I have a twin brother, Lior, who not only looks just like me but also works in the Quality Assurance industry. So I have a replacement if needed.
The second thing is that I hold a patent, registered in the US under my name, and I am currently working to apply it at Perfecto to the new digital space of mobile and web.
2. Can you tell us a little about your company and the products that you are creating? Tell us more about the “DNA” of Perfecto. What makes your company a great place to work?
Perfecto serves enterprise customers all around the world, mostly in the US. We have customers from many Fortune 500 companies who have huge demands and expect to get high quality services, so they can in turn deliver seamless experiences to their customers across the web and mobile channels.
The solution enables enterprises to create high-quality apps and sites, and is comprised of two components:
Firstly, a cloud platform containing thousands of real devices (phones, tablets, smart watches, etc.) connected to real live networks and desktop browsers for testing apps and web sites.
Secondly, a variety of testing tools allow users to perform manual and automated tests, as well as performance testing and monitoring, on web, responsive web and mobile apps.
Combined, customers are able to deliver the highest-quality experiences across digital channels, delighting customers and driving business.
Another major piece that sets us apart is our integration portfolio. Perfecto recognizes a major shift to open source tooling, and offers both a RESTful API and integrations with leading open source tools such as Selenium, Appium, Espresso, Calabash, Cucumber and Jenkins. In addition, Perfecto integrates with tools and IDEs from Microsoft, HP, IBM and CA.
As for our history, Perfecto has been in the field of Mobile and Web quality for years, and is still innovating and moving fast. (In fact, Perfecto was just named a Leader in Forrester’s Wave for Front-End Mobile Testing Tools, 2016). So working for a recognized leader in the industry has definitely made things here exciting. And since we practice Agile in development, we’re able to adapt and innovate quickly, keeping up with changes in the digital market.
In fact, if you compare Perfecto’s innovations with other vendors, you will see we release much faster than others. Our belief is that web and mobile are not only dynamic, but fast-paced industries, so we must perform this way as well. I’ve found that if an organization is not following the web and mobile trends, it quickly becomes irrelevant to the market and does not deliver the product according to customers’ requirements.
3. How do you see the testing and development ecosystems evolving in 3 or 5 years from now?
I see that test teams are becoming feature teams; it is a movement that has happened within the last 1-2 years. The importance of quality is at an all-time high as digital experiences are working their way into our lives like we could only have imagined years ago (smart homes, smart cars, reliance on mobile devices). With more mature teams, I’m seeing dev teams take on more responsibility when it comes to quality in order to address shortened release cycles and higher quality demands. This also imposes a challenge (and opportunity) for testers to grow their technical skills, experiment with newer tools such as open source, and transform into DevTest roles.
The easiest way to be first to market in many digital cases is to increase velocity without harming the quality of a website or app. What I see for the next 1-2 years is that while Agile is still growing, it is likely to become the de facto approach for the digital development life cycle. Developers will choose more open source tools, as they tend to meet their needs of improving velocity without impacting quality.
Also in the app space, everything will be much more connected. Everything will tie back seamlessly together as user experience, technology and processes are more user experience and innovation focused.
Most Application Performance Management (APM) tools became really good at detecting failing or very slow transactions, and giving you system monitoring and code-level metrics to identify root cause within minutes. While Dynatrace AppMon & UEM has been doing this for years, we took it to the next level by automatically detecting common architectural, performance […]
The post Automated Optimization with Dynatrace AppMon & UEM appeared first on about:performance.
At FlawCheck, we’re really excited about presenting to the Jenkins community at the upcoming Jenkins World 2016 in Santa Clara! FlawCheck will be presenting on “Secure Container Development Pipelines with Jenkins” in Exhibit Hall C, on Day 2 (September 14) from 2:00 PM - 2:45 PM. At FlawCheck, most of our time is spent with customers who are using Jenkins to build Docker containers, but are concerned about the security risks. FlawCheck’s enterprise customers want to use enterprise policies to define which of the containers they build with Jenkins reach production, and then continuously monitor them for compliance.
Building security into the software development lifecycle is already difficult for large enterprises following a waterfall development process. With Docker, particularly in continuous integration and continuous deployment environments, the challenge is even more difficult. Yet, for enterprises to do continuous deployment, security needs to be coupled with the build and release process and the process needs to be fully automated, scalable and reliable.
If you’re interested in container security and security of open source software passing through Jenkins environments, we’d encourage you to grab a seat at the FlawCheck talk, “Secure Container Development Pipelines with Jenkins” in Exhibit Hall C, on Day 2 (September 14) from 2:00 PM - 2:45 PM. In the meantime, follow us on Twitter @FlawCheck and register for a free account at https://registry.flawcheck.com/register.
Founder and CEO
This is a guest post written by Jenkins World 2016 speaker Anthony Bettini. Leading up to the event, there will be many more blog posts from speakers giving you a sneak peek of their upcoming presentations. Like what you see? Register for Jenkins World! For 20% off, use the code JWHINMAN
Blog Categories: Jenkins
Ashley Hunsberger, Greg Sypolt and Chris Riley contributed to this post.
Bringing test automation into your organization is not as easy as writing and running a Selenium script. It involves first getting buy-in, building a team, establishing a strategy, and picking the right tools. During the Q&A portion of a recent webinar hosted by Chris Riley, Ashley Hunsberger, and Greg Sypolt, the presenters realized that these aspects of introducing test automation are well known, but not well understood. In our first post of the series we discussed getting buy-in. Below, in the second post, we discuss how to build a test automation strategy.
Getting started with test automation is easy. If you have a technically minded QA team, you can usually create your test script, sign up for a test cloud, and run the script in just a few hours. But keeping a test automation environment going for the long term is not as easy as any of us would like to believe. QA teams are generally better at building strategy than any other team. And when it comes time to build a test automation environment, strategy is a key first element to both getting started and keeping it going.
When building a strategy, you have to address how the environment works, how the tests are run, how the test suite is maintained, the process of running tests, the design patterns of the test scripts, and more. Let’s look at questions from the webinar to address some ways to approach your test automation strategy.
Would you define what “traditional QA” is and what QA “was”?
Ashley: When I say traditional QA, I’m referring to waterfall. Teams got their requirements, engineers did their work, QA went off and did theirs, but couldn’t really do anything until dev was ‘complete’—and I use that term loosely. Devs wouldn’t know there was a bug in their code until sometimes weeks or months later because of testing cycles.
Greg: Traditional processes are designed for getting requirements, and developers and QA work in their silos. There is rarely any communication between developers and QA during the sprint. At the end of the sprint, developers will throw finished code over the wall at QA. This approach doesn’t allow for collaboration, and it leads to slow feedback, and no iterations during the sprint. Iterations are a key element to modern dev.
Can you define a list of essential skills every modern QA team member should have?
Ashley: I still maintain that a QA mindset, whether via automation or not, is incredibly valuable in determining a holistic test strategy. You always need to be able to consider the end user, how the system works at a broader level than your team’s feature du jour, and what other types of testing to consider (not just automated tests, but accessibility, localization, performance, usability, security, exploratory). The key is how to incorporate that into your sprint and really push to the definition of done.
Greg: For us, it’s not a choice. As a Platform as a Service (PaaS), we do not have dedicated DevOps resources, so we are experimenting with modern QA team members who can take on both QA and DevOps responsibilities. QA is the gatekeeper of quality, so it makes sense that they maintain and champion the continuous integration tooling.
How do we write more tests faster?
Greg: Why do you need to write more tests? Everyone shares responsibility and follows the Definition of Done (DoD). Focus on building the right types of tests. It’s more about building a testing portfolio that identifies the areas of the application that would be showstoppers for end users, or that affect how your application makes money (at Gannett / USA Today, network ads testing is critical).
Ashley: I completely agree with Greg. Make sure you are writing the right tests instead of writing something just because you can. At Blackboard, we identify the most critical features and workflows and automate those, with as much unit and integration testing as possible, and some UI integration tests for our showstopper workflows.
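The split both speakers describe is often called the test pyramid: many fast unit tests on the logic that matters most, fewer integration checks above them. A minimal sketch in Python, where the checkout and discount functions are hypothetical stand-ins for "how your application makes money":

```python
# Test-pyramid sketch for a hypothetical checkout flow: exhaustive,
# cheap unit tests on the core business logic, plus one broader
# integration-style check that wires the pieces together.

def apply_discount(total_cents, percent):
    """Pure business logic: cheap to unit-test exhaustively."""
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return total_cents - (total_cents * percent) // 100

def checkout(cart, percent):
    """Thin layer over the unit-tested logic."""
    subtotal = sum(cart.values())
    return apply_discount(subtotal, percent)

# Unit tests: exercise edge cases directly.
assert apply_discount(1000, 0) == 1000
assert apply_discount(1000, 100) == 0
assert apply_discount(999, 10) == 900   # floor division of the discount

# One integration-style test for the showstopper workflow.
assert checkout({"book": 1500, "pen": 500}, 10) == 1800
```

The ratio is the point: the edge cases live in the fast unit layer, and only the end-to-end "showstopper" path gets a heavier test.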
Many times I feel like the DevOps Infrastructure problems have to be solved before I can do test automation. Is that true?
QA, Dev, and DevOps should share this task. It isn’t necessary for local development, but when it comes to scale and Continuous Integration (CI), it must be done up front (infrastructure first, not infrastructure last).
Greg: Read this post regarding shared responsibilities. The team shares responsibility for DevOps tasks, but historically no one has championed them. The modern QA position has become a technical role, the gatekeeper of quality, and may take on more DevOps responsibilities and tasks.
Ashley: Every company is different. At ours, QA works more and more closely with our DevOps team as we transition to a modern delivery chain. We definitely still have our kinks, but we are still doing test automation. We are still working to get into the CI pipeline, but that doesn’t prevent us from writing meaningful tests.
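To illustrate the "infrastructure first, not infrastructure last" point, here is a minimal Python sketch of the fail-fast stage ordering a CI setup provides: cheap checks run first so feedback stays quick. The stage names and pass/fail lambdas are placeholders, not a real pipeline definition.

```python
# Fail-fast pipeline sketch: run stages in order and stop at the
# first failure, so expensive UI tests never run on broken code.

def run_pipeline(stages):
    """Return (completed stage names, name of failed stage or None)."""
    completed = []
    for name, check in stages:
        if not check():
            return completed, name  # stop at the first failing stage
        completed.append(name)
    return completed, None

stages = [
    ("lint", lambda: True),
    ("unit-tests", lambda: True),
    ("integration-tests", lambda: False),  # simulate a failure
    ("ui-smoke-tests", lambda: True),      # never reached
]

done, failed = run_pipeline(stages)
assert done == ["lint", "unit-tests"]
assert failed == "integration-tests"
```

Doing this "up front" simply means the ordering and gating exist before the test suite grows, rather than being retrofitted afterward.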
There is no question that QA is evolving. More tech, more strategy. But this is a fantastic opportunity for QA to become a first-class citizen. Given QA teams’ holistic view of the entire delivery chain (compared to IT Ops and developers, who are consistently in the weeds), QA is best suited to build a strategy with DevOps to modernize development operations.
On October 28th and 29th, GTAC 2014, the eighth GTAC (Google Test Automation Conference), was held at the beautiful Google Kirkland office. The conference was completely packed with presenters and attendees from all over the world (Argentina, Australia, Canada, China, many European countries, India, Israel, Korea, New Zealand, Puerto Rico, Russia, Taiwan, and many US states), bringing with them a huge diversity of experiences.
Speakers from numerous companies and universities (Adobe, American Express, Comcast, Dropbox, Facebook, FINRA, Google, HP, Medidata Solutions, Mozilla, Netflix, Orange, and University of Waterloo) spoke on a variety of interesting and cutting edge test automation topics.
All of the slides and video recordings are now available on the GTAC site. Photos will be available soon as well.
This was our most popular GTAC to date, with over 1,500 applicants, almost 200 of whom applied to speak. About 250 people filled our venue to capacity, and the live stream peaked at about 400 concurrent viewers, with 4,700 playbacks during the event. And there was plenty of interesting Twitter and Google+ activity during the event.
Our goal in hosting GTAC is to make the conference highly relevant and useful not only for attendees, but for the larger test engineering community as a whole. Our post-conference survey shows that we are close to achieving that goal:
If you have any suggestions on how we can improve, please comment on this post.
Thank you to all the speakers, attendees, and online viewers who made this a special event once again. To receive announcements about the next GTAC, subscribe to the Google Testing Blog.