Feed aggregator

Possible Jenkins Project Infrastructure Compromise

Last week, the infrastructure team identified the potential compromise of a key infrastructure machine. The compromise may have been part of what could be categorized as an attempt to target contributors with elevated access. Unfortunately, when facing the uncertainty of a potential compromise, the safest option is to treat it as if it were an actual incident, and react accordingly. The machine in question had access to binaries published to our primary and secondary mirrors, and to contributor account information. Since this machine is not the source of truth for Jenkins binaries, we verified that the files distributed to Jenkins users (plugins, packages, etc.) were not tampered with. We...
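
The announcement does not spell out the verification mechanics, but conceptually the check comes down to comparing each mirrored artifact against a checksum from the source of truth. A minimal Python sketch, assuming you already have a trusted manifest mapping file names to SHA-256 digests (the manifest is a hypothetical stand-in, not Jenkins tooling):

    import hashlib
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        """Stream the file so large binaries need not fit in memory."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def find_tampered(mirror_dir: Path, trusted_manifest: dict) -> list:
        """Return names of mirrored files whose digest differs from the trusted one."""
        return [
            name
            for name, expected in trusted_manifest.items()
            if sha256_of(mirror_dir / name) != expected
        ]
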
Categories: Open Source

Pipeline 2.x plugins

Those of you who routinely apply all plugin updates may already have noticed that the version numbers of the plugins in the Pipeline suite have switched to a 2.x scheme. Besides aligning better with the upcoming Jenkins 2.0 core release, the plugins are now being released with independent lifecycles. “Pipeline 1.15” (the last in the 1.x line) included simultaneous releases of a dozen or so plugins with the 1.15 version number (and 1.15+ dependencies on each other). All these plugins were built out of a single workflow-plugin repository. While that was convenient in the early days for prototyping wide-ranging changes, it...
Categories: Open Source

Fake Backends Testing with RpcReplay

Testing TV - Thu, 04/21/2016 - 20:46
Keeping tests fast and stable is critically important. This is hard when servers depend on many backends. Developers must choose between long, flaky tests and writing and maintaining fake implementations. Instead, tests can be run using recorded traffic from these backends. This provides the best of both worlds, allowing developers to test quickly against […]
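
RpcReplay is a Google-internal tool, but the record/replay pattern it implements is easy to sketch: capture real request/response pairs once, then serve them from a strict fake in tests. A minimal Python illustration under assumed interfaces (this is not the RpcReplay API):

    import json
    from pathlib import Path

    class RecordingBackend:
        """Wraps a real client and logs every request/response pair to disk."""
        def __init__(self, real_client, log_path: Path):
            self.real_client = real_client
            self.log_path = log_path
            self.entries = []

        def call(self, method: str, request: dict) -> dict:
            response = self.real_client.call(method, request)
            self.entries.append({"method": method, "request": request, "response": response})
            self.log_path.write_text(json.dumps(self.entries))
            return response

    class ReplayBackend:
        """Serves recorded responses; fails fast on any unrecorded request."""
        def __init__(self, log_path: Path):
            self.entries = json.loads(log_path.read_text())

        def call(self, method: str, request: dict) -> dict:
            for entry in self.entries:
                if entry["method"] == method and entry["request"] == request:
                    return entry["response"]
            raise LookupError("no recorded response for %s %r" % (method, request))

In tests, the server under test is constructed with a ReplayBackend instead of the real client, so runs are fast, deterministic, and independent of backend availability.
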
Categories: Blogs

Regression Testing with Diffy

Software Testing Magazine - Thu, 04/21/2016 - 20:22
Your team has just finished a major refactor of a service and all your unit and integration tests pass. Nice work! But you are not done just yet. Now you need to make extra sure that you didn’t break anything and that there are no lurking bugs you haven’t caught yet. It’s time to put Diffy to work. Diffy is an open source tool developed by Twitter to find potential bugs in software services. Unlike tools that check that your code is sound, like unit or integration tests, Diffy compares the behavior of your modified service by standing up instances of your new service and your old service side by side, routing example requests to each, comparing the responses, and reporting back any regressions that surface from those comparisons. Video producer: https://developers.google.com/google-test-automation-conference/
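
Diffy itself is a standalone service you deploy in front of your old and new builds, but the heart of the technique fits in a few lines: replay the same request against both and report any divergence. A simplified Python sketch; the URLs and sample paths are placeholders:

    import requests  # third-party HTTP client, assumed installed

    OLD = "http://localhost:8080"  # known-good build (placeholder URL)
    NEW = "http://localhost:8081"  # candidate build (placeholder URL)

    def diff_endpoint(path: str) -> list:
        """Replay one request against both builds and describe any divergence."""
        old = requests.get(OLD + path, timeout=10)
        new = requests.get(NEW + path, timeout=10)
        problems = []
        if old.status_code != new.status_code:
            problems.append("%s: status %s -> %s" % (path, old.status_code, new.status_code))
        elif old.text != new.text:
            problems.append("%s: response body changed" % path)
        return problems

    for path in ["/health", "/api/v1/users/42"]:  # example traffic sample
        for problem in diff_endpoint(path):
            print(problem)

Real Diffy goes further: it also runs two instances of the old code side by side, so nondeterministic noise can be measured and filtered out of the comparison.
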
Categories: Communities

Closing Gap between EUE, Application & Infrastructure Performance

I’m excited to co-author this blog with Bishnu Nayak from our new partner FixStream. I’ll write the Dynatrace perspective to set the stage, then let Bishnu add more details on how FixStream’s Meridian offering extends the definition and implementation of comprehensive performance visibility. At Dynatrace, we – along with our […]

The post Closing Gap between EUE, Application & Infrastructure Performance appeared first on about:performance.

Categories: Companies

What I Learned Pairing on a Workshop

Agile Testing with Lisa Crispin - Thu, 04/21/2016 - 18:20

I pair on all my conference sessions. It’s more fun, participants get a better learning opportunity, and if my pairs are less experienced at presenting, they get to practice their skills. Big bonus: I learn a lot too!

I’ve paired with quite a few awesome people. Janet Gregory and I have, of course, been pairing for many years. In addition, I’ve paired during the past few years with Emma Armstrong, Abby Bangser, and Amitai Schlair, among others. I’ve picked up many good skills, habits, ideas and insights from all of them!

The Ministry of Testing published my article on what I learned pairing with Abby at a TestBash workshop about how distributed teams can build quality into their product. If you’d like to hone your own presenting and facilitating skills, consider pairing with someone to propose and present a conference session. It’s a great way to learn! And if you want to pair with me in 2017, let me know!

The post What I Learned Pairing on a Workshop appeared first on Agile Testing with Lisa Crispin.

Categories: Blogs

What’s New in Surround SCM 2016

The Seapine View - Thu, 04/21/2016 - 14:30

Surround SCM 2016 is now available and it includes some great new features you don’t want to miss out on. Here’s a quick look at what’s new.

Capture electronic signatures and run audit trail reports for compliance purposes

To meet Title 21 CFR Part 11 compliance requirements, you can enable electronic signatures for workflow states to track when files move to the state and who signed off on the files in that state. Signature records are stored in the mainline database, and you can run audit trail reports to validate and review the records during an audit or for other compliance purposes.

Perform the following tasks to capture electronic signatures and create audit trail reports.

1. Configure workflow states that require an electronic signature. You can add new states or edit existing ones to change the signature requirements. More info

[Screenshot: scm2016AddSigRequiredState]

2. Configure compliance server options to specify the electronic signature settings, such as the signature components and the certification message to include with signatures. More info

[Screenshot: scm2016ComplianceOptions]

3. Users responsible for signing off on files set states on files and enter their electronic signature. More info

[Screenshot: scm2016EnterSignature]

4. Create and run audit trail reports to review electronic signature information, such as when files entered specific states and the users who entered signatures. More info

[Screenshot: scm2016AuditTrailReport]
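
Surround SCM stores these records in the mainline database; as a mental model only, a signature record ties together who signed, which state the file entered, and when. The sketch below is purely illustrative Python, not Surround SCM's actual schema or API:

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass(frozen=True)
    class SignatureRecord:
        # Illustrative fields only; not Surround SCM's real schema.
        file_path: str
        file_version: int
        workflow_state: str   # e.g. "Reviewed"
        signed_by: str
        meaning: str          # the certification message the signer agreed to
        signed_at: datetime

    def audit_trail(records, state):
        """A minimal audit-trail report: every signature that moved a file into `state`."""
        return sorted(
            (r for r in records if r.workflow_state == state),
            key=lambda r: r.signed_at,
        )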

Reset files to default workflow states when versions change

Administrative users can configure workflow states to automatically reset to the default state if the file version changes. This helps ensure files are not left in an incorrect state when they are updated. For example, you can configure the Reviewed state to reset to the default state to automatically restart a review workflow when additional changes are checked in. More info

Run reports from the Source Tree window

There are two new ways to run reports on files from the Source Tree window.

You can quickly create and run one-time use reports from the Tools > Quick Reports menu. You no longer need to open the Reports dialog box first to create reports you don’t need to save. More info

You can also create custom shortcut menu items to run saved reports when you right-click repositories. Before you can run reports this way, you need to create a plug-in for the menu item and add it to the repository shortcut menu. More info

[Screenshot: scm2016CustomReportShortcutMenuItem]

View syntax highlighting in the built-in file viewer and editor

Enable syntax highlighting to show specific text in different styles and colors in the built-in file viewer and editor. This makes file contents easier to read when you view and edit files, perform code reviews, and search for text in files. More info

[Screenshot: scm2016EditFileSyntaxHighlighting]

Learn more about Surround SCM 2016

Check out the release notes and help to learn more about Surround SCM 2016.

Categories: Companies

Cambridge Lean Coffee

Hiccupps - James Thomas - Thu, 04/21/2016 - 08:24

We hosted this month's Lean Coffee at Linguamatics. Here are some brief, aggregated comments on topics covered by the group I was in.

Is it fair to refer to testers as "QA"?
  • One test manager talked about how he renamed his test team the QA team.
  • He has found that it has changed how his team are regarded by their peers (in a positive way).
  • Interestingly, he doesn't call it "Quality Assurance", just "QA".
  • His team have bought into it.
  • Role titles are a lot about perception: one colleague told him that "QA" feels more like "BA".
  • Another suggestion was that "QA" could stand for "Quality Assisting".
  • We covered the angle that (traditional) QA is more about process and compliance than what most of us generally call testing.
  • We didn't discuss the fairness aspect of the original question.

What books have you read recently that contributed something to your testing?
  • The Linguamatics test team has a reading group for Perfect Software going on at the moment.
  • Although I've read the book several times, I always find a new perspective on some aspect of it when I dip in. This time around it's been meta testing.
  • The book reinforces the message that a lot of testing (and work around the actual testing) is psychology.
  • But also that there is no simple recipe to apply in any situation.
  • We discussed police procedural novels and how the investigation, hypotheses, and data gathering in them might be related to our day job.

When should we not look at customer bugs?
  • When your product is a platform for your customers to run on, you may find bugs in customer products when testing yours.
  • How far should you go when you find a bug in customer code? 
  • Should you carry on investigating even after you've reported it to them?
  • In the end we boiled this question down to: as a problem-solver, how do you leave an unresolved issue alone?
  • Suggestions: time-box, remember that your interests are not necessarily the company priorities, automate (when you think you need lots of attempts to find a rare case), take the stakeholder's guidance, brainstorm with others, ... 
  • If the customer is still screaming, you should still be working. (An interesting metric.)

Image: https://flic.kr/p/cLViad
Categories: Blogs

Making your own DSL with plugins, written in Pipeline script

In this post I will show how you can make your own DSL extensions and distribute them as a plugin, using Pipeline Script. A quick refresher: Pipeline has a well-kept secret: the ability to add your own DSL elements. Pipeline is itself a DSL, but you can extend it. There are two main reasons I can think of that you may want to do this: you want to reduce boilerplate by encapsulating common snippets/things you do in one DSL statement, or you want to provide a DSL that provides a prescriptive way that your builds work - uniform across your organisation's Jenkinsfiles. A DSL could look as simple as acmeBuild { ...
Categories: Open Source

How LoadRunner can act as a catalyst to accelerate the performance testing process

HP LoadRunner and Performance Center Blog - Wed, 04/20/2016 - 22:52

[Image: ADM chemistry.PNG]

In chemical reactions, a catalyst makes reactions occur faster and require less activation energy. In performance testing, HPE LoadRunner is the catalyst. Keep reading to find out how to maximize its capabilities.

Categories: Companies

Analyzing JMeter Application Performance Results

JMeter is a very popular open source load testing tool with great flexibility thanks to its Java-based extension points. What it lacks is the ability to analyze the results in combination with metrics from your application and your infrastructure. As mentioned in a recent PerfBytes podcast, because JMeter itself doesn’t provide good data visualization, most users stream […]
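
Before reaching for a full analysis platform, you can get surprisingly far post-processing the raw results yourself. A sketch that assumes a CSV-format .jtl results file with the default "timeStamp", "elapsed", "label", and "success" columns:

    import csv
    from collections import defaultdict

    def elapsed_by_label(jtl_path: str) -> dict:
        """Group response times (ms) by sampler label from a CSV-format .jtl file."""
        samples = defaultdict(list)
        with open(jtl_path, newline="") as f:
            for row in csv.DictReader(f):
                samples[row["label"]].append(int(row["elapsed"]))
        return samples

    def percentile(values, pct):
        """Nearest-rank percentile; good enough for a quick report."""
        ordered = sorted(values)
        index = max(0, int(round(pct / 100.0 * len(ordered))) - 1)
        return ordered[index]

    for label, elapsed in elapsed_by_label("results.jtl").items():
        print("%s: n=%d p50=%dms p95=%dms max=%dms" % (
            label, len(elapsed), percentile(elapsed, 50),
            percentile(elapsed, 95), max(elapsed)))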

The post Analyzing JMeter Application Performance Results appeared first on about:performance.

Categories: Companies

SmartBear Simplifies Mobile API Testing

Software Testing Magazine - Tue, 04/19/2016 - 21:54
SmartBear Software has released improvements to its enterprise API readiness platform, Ready! API, to simplify API testing for mobile application providers and speed time to delivery through advanced API recording and mocking capabilities. Ready! API, an end-to-end API readiness platform, includes capabilities to record real-time traffic from mobile devices and other software, producing reusable tests and virtualized mock services from captured API interactions. Recording from real-time traffic simplifies and accelerates the process of creating testing structures, saving developers and QA engineers valuable time in the software development and testing cycles. With updates to Ready! API 1.7, developers and QA engineers can:
  • Easily filter traffic to and from APIs based on content type, host or HTTP verb properties
  • Simulate previously captured traffic as automated regression tests
  • Generate dynamic virtual APIs for other teams and partners
Categories: Communities

Cross-Browser Test Automation with Ranorex

Ranorex - Tue, 04/19/2016 - 20:00

If you’re testing a web application, it would naturally be best to test it not only with one but with all of the most popular browsers (cross-browser testing).

This blog post will show you how to record your automated website browser tests and then automatically execute the recorded tests on different browsers for browser compatibility testing. Ranorex is a cross-browser testing tool which can run tests in Microsoft Internet Explorer, Mozilla Firefox, Google Chrome, Chromium, Apple Safari and Microsoft Edge.

Sample Test Suite Project


To demonstrate how to perform a multi-browser test, we will generate a small sample that enters data into our VIP Database Test Web Application.

First of all, we’ll create a Test Case holding two Recordings, one for opening and one for closing the browser, as setup and teardown modules.

 

[Screenshot: Ranorex Sample Project]

 

Now we add an “OpenBrowser” action to the OpenBrowser Module, with “http://www.ranorex.com/web-testing-examples/vip/” as the URL and, for example, “IE” as the browser.

 

[Screenshot: Open Browser Module]

 

As the next step, we add a recording module that validates the status string on connecting and disconnecting.

 

[Screenshot: Ranorex Sample Project]

 

The recording module simply

  • validates that the status text equals “Online”,
  • disconnects,
  • validates that the status text equals “Offline”,
  • connects again,
  • confirms the connection in the pop-up window
  • and validates that the status text equals “Online” again.

 

[Screenshot: Test Connection]

 

Make sure to have two repository items representing the connection status text, one for “Online” and one for “Offline”. This allows you to overcome timing issues with validation steps. In our application it takes some time for the status text to change from “connecting…” to “Online”. To make the validation work, we can simply move the actual validation into the RanoreXPath and only validate the existence of the status text in our web page. By doing so, we use the search timeout of the repository item to wait for the status text to change.
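
Ranorex gets this wait behavior from the repository item's search timeout. For comparison, the equivalent pattern in a WebDriver-based tool is an explicit wait on the expected text; a minimal Selenium sketch in Python, where the element id "status" is an invented placeholder:

    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait

    driver = webdriver.Chrome()
    driver.get("http://www.ranorex.com/web-testing-examples/vip/")

    # Poll until the status text becomes "Online" instead of asserting immediately,
    # so the transient "connecting..." state does not fail the check.
    # "status" is an invented element id for this sketch.
    WebDriverWait(driver, timeout=10).until(
        lambda d: d.find_element(By.ID, "status").text == "Online"
    )
    driver.quit()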

In addition to the TestConnection recording, we will generate a recording for adding VIPs to the database. This recording will be added to a new Test Case, as we want to add VIPs in a data-driven way and do not want to open and close the browser and test the connection with each iteration of adding a new VIP.

 

[Screenshot: Ranorex Sample Project]

 

The recording might look something like this:

 

[Screenshot: Add VIP]

 

As we want to make our test data-driven, we have to add variables which can be bound to the data from our data source.

The key sequences for first and last name contain the variables $FirstName and $LastName. To select the category, we have to add a SetValue action and set the TagValue to the variable $Category. The gender can be set by adding a variable to the RanoreXPath of the corresponding repository item. Additionally, we validate the VIP count against a variable $VIP_Count.

After generating the recording, we create a data source for the Test Case Add_VIP’s and bind the data tables to the variables of the recording AddVIP.

 

[Screenshots: Add Data Source to AddVIP]

 

As the last step, we add a Close Application action to the CloseBrowser Module, with the application folder of the web application as the repository item.

 

[Screenshot: Close Browser Module]

 

Now we can execute our Test Suite Project, which:

  • opens the web application in Internet Explorer in the setup region,
  • performs connection tests,
  • adds 3 VIPs following the data-driven approach (the data for the 3 VIPs are stored in a simple data table),
  • validates the count of the VIPs stored in the web application
  • and closes the browser in the tear down region.

Cross-Browser Test


To perform these steps not only for IE but also for the other supported browsers, we first turn the browser that is started in the “OpenBrowser” recording into a variable.

To do so, open the recording “OpenBrowser” and edit the browser that should be started. Choose “As new Variable…” instead of “IE” and add a new variable called BrowserName.

 

[Screenshots: add browser variable]

 

After that, add a new simple data table to the Test Case “Add_VIP_and_Validate”, holding the names of the different browsers, and bind the data connector to the variable “BrowserName”.

 

[Screenshots: Add Data Source]

 

After making the browser variable in this way and binding the variable to a table holding all supported browser names, you can execute your test script for all supported browsers.
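
The same data-driven idea carries over to code-centric frameworks: keep the browser name as a parameter and run the identical test body once per table entry. For comparison only (this is not Ranorex code), a small Selenium/pytest sketch:

    import pytest
    from selenium import webdriver

    BROWSERS = ["chrome", "firefox", "edge"]  # mirrors the browser-name data table

    def make_driver(name):
        if name == "chrome":
            return webdriver.Chrome()
        if name == "firefox":
            return webdriver.Firefox()
        if name == "edge":
            return webdriver.Edge()
        raise ValueError("unsupported browser: %s" % name)

    @pytest.mark.parametrize("browser", BROWSERS)
    def test_vip_page_loads(browser):
        driver = make_driver(browser)
        try:
            driver.get("http://www.ranorex.com/web-testing-examples/vip/")
            assert "VIP" in driver.title  # an assumption about the page title
        finally:
            driver.quit()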

 

[Screenshot: Reporting]

The post Cross-Browser Test Automation with Ranorex appeared first on Ranorex Blog.

Categories: Companies

Impressions from DevOpsDays Vancouver 2016

Sonatype Blog - Tue, 04/19/2016 - 18:57
DevOpsDays are always a great event for a geek to attend. You get to chat with fellow hackers and coders, people who actually understand what you are talking about. The vibe that results from these conversations is always amazing. Presenting is definitely a challenge, but great if you...

To read more, visit our blog at www.sonatype.org/nexus.
Categories: Companies

Testlio Gets $6.25 Million Financing

Software Testing Magazine - Tue, 04/19/2016 - 17:45
Testlio announced it has closed $6.25 million in Series A funding led by Silicon Valley-based Altos Ventures and Vertex Ventures. Testlio is a global testing service provider that helps development teams enhance their quality assurance function. Already profitable, Testlio grew 500% in 2015 and will invest the new funding to build out infrastructure in San Francisco and Tallinn, Estonia to support its global customers. “Today, we partner closely with some of the most demanding companies in the world,” said Kristel Viidik, CEO of Testlio. “In the future, our goal is to change the way companies think about their testing processes as a whole. With mobile applications, every brand is able to directly reach their consumers with a high-fidelity experience. Quality assurance plays a critical role in ensuring the best experience possible, and these companies are in need of a trusted partner to help deliver on this mission. We are expanding our service beyond mobile app testing so that we can continue to meet the changing needs of our customers now and in the future.”
Categories: Communities

Mobile vs. Web: Which is Harder to Test?

Sauce Labs - Tue, 04/19/2016 - 15:00

Have you ever worked on a web-based test team and switched to a mobile team and wondered if your life is about to get easier or harder? There are significant differences between testing mobile vs. web, and yes, one is MUCH harder than the other. Want to guess which one? Read on and see if you guessed correctly.

Let’s Compare

The comparison below shows the different facets of testing and, for each, where execution is most challenging.

Feature Functionality Testing (web is harder)

For the case where you have a web application with a supporting mobile app, it is likely the app will have a subset of the features that the web side does. When first developing a feature, it is new to the web development team. They have to go through the process of designing and building out the new feature, and training their team on the concept.

When the supporting app is created, it would most likely rely on the existing web feature set, so the design and concepts should be easier for the team to comprehend.

Feature Parity (mobile is harder)

Speaking of features, when you are building a web app you only have to account for the features in one user world, such as the browser. The feature only needs to be developed once. When you create features on a mobile app, you have to take into account not only each device platform’s capabilities, but also each user community’s expectations.

Apple and Droid users have different expectations for how their apps will work. And if you want to have feature parity in your apps, then the development process management becomes all the more difficult. From a test perspective you now have double the work.

Deploying the Application (mobile is harder)

When using CI, your test environments can constantly be updated with the latest builds. You can also do this on the mobile side if you are using simulators and emulators, or a device cloud such as the one provided by Sauce Labs.

But what if you want to test on live, local devices? Most likely your team is using them throughout the day and they are not connected to any CI environment. You have to manually install the apps when you want to test the latest build, which can be time-consuming, especially when you have multiple devices and platforms.

Integration Tests (it's a draw)

Testing needs to account for backend services. Both web and mobile development teams need to run integration tests against their respective APIs. The time and effort to accomplish this are similar.

Automating with Page Object Locators (mobile is harder)

When writing automation on a web application, you need to find the page object locators. You only need to write code to support one set of locators.

If you are developing automated tests for mobile apps, you now need to work with two different dev teams to determine the locators. (Note: this is an ideal place to have cross-team standards to build consistency between the apps.) And as stated previously, the features might not be in sync, causing testers to write multiple tests for similar features.

Tool and Support Community (mobile is harder)

Web automation is a very mature industry. Not only are the tools in place to support most automation interactions, but the user community is available to answer any question for any level of user. While the mobile world of automation has come a long way, it still doesn’t have the amount of support that web does.

Platform Complexities (mobile is harder)

When talking about the test infrastructure, mobile is much more fragile. The different deployment rules between Apple and Droid can make for some tricky problems to solve. Not only that, but the multiple-platform issue comes into play again because now you are supporting two sets of infrastructure that can push updates at any time, causing your tests to unexpectedly break.

Test Strategy (mobile is harder)

When you think of testing on the web, your strategy usually takes into account the different supported browsers and maybe the underlying operating systems (OS). With mobile, you have to take into account the OS versions for each platform, as well as device types. While Apple is pretty stable and its user community is up-to-date on the OS, Droid can have a ton of different configurations that your user community supports.

Did You Guess Right?

When I first posed the mobile vs. web question to an automation architect who has worked in both arenas, he immediately replied “mobile is 110% harder.” I have to agree.

But that is not bad news. First, mobile tooling is still evolving, and does not have the maturity that web tooling does. Second, there are already plenty of resources available that can help, such as real-device clouds and more and more robust testing frameworks like Appium and Robotium. While you can’t do much to change the nature of app stores, testing and CI tools for mobile are only getting better.
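
To make the locator problem from the comparison above concrete: a common mitigation is a single page object backed by one locator table per platform, so only the table differs between the iOS and Android suites. A rough Appium-style sketch in Python; the accessibility ids are invented for illustration:

    from dataclasses import dataclass

    @dataclass
    class LoginLocators:
        username_field: str
        password_field: str
        submit_button: str

    # One locator table per platform; the accessibility ids are invented.
    IOS = LoginLocators("login_user", "login_pass", "login_go")
    ANDROID = LoginLocators("username_input", "password_input", "submit_btn")

    class LoginPage:
        """Test steps are written once; only the locator table is platform-specific."""
        def __init__(self, driver, locators: LoginLocators):
            self.driver = driver
            self.locators = locators

        def log_in(self, user, password):
            find = lambda loc: self.driver.find_element("accessibility id", loc)
            find(self.locators.username_field).send_keys(user)
            find(self.locators.password_field).send_keys(password)
            find(self.locators.submit_button).click()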

Joe Nolan (@JoeSolobx) is a Mobile QA Team Manager with over 12 years of experience leading internationally distributed QA teams, and is the founder of the DC Software QA and Testing Meetup.

Categories: Companies

Live Sneak Peek: What’s New in TestTrack 2016

The Seapine View - Tue, 04/19/2016 - 12:30

TestTrack 2016 is coming soon, and we’re offering a live, 30-minute What’s New in TestTrack 2016 sneak peek on Wednesday, April 27!

Join Gordon Alexander, Seapine Solutions Consultant, for a preview and live demo of the key new features, including how to:

  • Export Items to Word. You can now export all TestTrack items to Word! The completely rewritten export functionality makes it easy to produce documents that include the exact data you need, in the exact Word format you want.
  • View and Navigate to Linked Items. To quickly see links between items, you can now add columns with linked item information to list windows. Want to see more? Click a linked item and go straight to it to see more details.
  • Use Note Widgets with Dashboards. Want to easily share getting started information with new team members, provide links to important information for a sprint, or provide team contact information? Put a note, which can be seen by everyone, right on a dashboard.
Register Now

Can’t make it to the live event? Not to worry—register and we’ll email you the recording as soon as it’s available.

Categories: Companies

Romanian Testing Conference, Cluj Napoca, Romania, May 19-20 2016

Software Testing Magazine - Tue, 04/19/2016 - 11:00
The Romanian Testing Conference is a two-day conference focused on software testing. It provides a rich content program, delivered in a comfortable and well-structured manner by professional speakers from the software testing industry. In the agenda of the Romanian Testing Conference you can find topics like “How to become a testing toolsmith”, “Testing in the age of complexity”, “Test Flow: Improve yourself with mental training”, “Does Agile still need testers?”, “Pair testing in an agile world”, “Automation – the good, the bad, the ugly”, “Inspiring Testing – Test Leadership”, “An Intimate and Up Close View Into a Tester’s Career”, “A Tester’s Journey: From Germany to Silicon Valley”, “Analyzing statement coverage at Google”, “Cloud Performance – What Really Matters”, “Twitter and the context-driven testing community”, “How to build a unified automation framework for web and Mobile Applications”. Web site: http://www.romaniatesting.ro/ Location for the Romanian Testing Conference: Grand Hotel Italia, Strada Trifoiului 2, Cluj-Napoca, Romania
Categories: Communities

Agile Delivery, London, UK, June 2 2016

Software Testing Magazine - Tue, 04/19/2016 - 10:45
Agile Delivery is a one-day event taking place in London. It targets tech and business people who focus on software development and software testing automation, Behaviour-Driven Development (BDD) and DevOps for the retail, finance, government and digital sectors. The conference provides both presentations and workshops. In the agenda of Agile Delivery you can find topics like “Culture Before Tooling or Does Tooling Foster Culture?”, “Product versus Craft”, “Collaboration, Hands On BDD for Product Owners, Devs and Testers”, “Zero to Tested with Docker and Ruby”, “Agile Transformation for CIOs”. Web site: http://agile.delivery/ Location for the Agile Delivery conference: Old Street, London, UK
Categories: Communities

It’s Not A Factory

DevelopSense Blog - Tue, 04/19/2016 - 06:38
One model for a software development project is the assembly line on the factory floor, where we’re making a buhzillion copies of the same thing. And it’s a lousy model. Software is developed in an architectural studio with people in it. There are drafting tables, drawing instruments, good lighting, pens and pencils and paper. And […]
Categories: Blogs
