
Feed aggregator

Spotlighting Important Data in TestTrack List Windows

The Seapine View - Thu, 08/07/2014 - 23:27

We know your team has a lot of data in TestTrack and sometimes (oftentimes?) it’s hard to wade through all of that text-based information to find what’s really important in the moment. The new Field Value Styles in TestTrack 2014.1 allow you to use colors, icons and different fonts to differentiate information and work with large amounts of data faster.

There are two components to using the field value styles. First you create a style and then you apply that style to certain types of data.

Create a Field Value Style

To get started, go to Tools > Administration > Field Value Styles. If the menu option is grayed out, check Administration permissions within your security group. Click Add to create a new style. In this example I’ve created a style called Passed that does three things:

  1. Changes the text color to green
  2. Bolds the text
  3. Places an icon before the text

The icon I used here is installed with TestTrack in the workflowicons folder, which is in the client installation directory. You can also use your own icons as long as they’re 16×16 pixels.


Apply a Field Value Style

Now that you have a Field Style, you can start applying it to different fields. For this example, I’m going to apply my Passed style to the workflow status of test runs. To do that, go to Tools > Administration > Workflow and select Test Runs from the drop-down menu. Then edit the Passed state and set the Style drop-down to the Passed style. You can set the style for any workflow state on any item type, as well as any general or custom drop-down field. For general and custom fields, go to Tools > Configure List Values > Field name to assign a style to a value.


Here are the new field styles in action on the Test Runs list window.



Categories: Companies

Web Performance QA Tester and Load Tester 6.3 Released

Web Performance Center Reports - Thu, 08/07/2014 - 23:06
If you were wondering why there’s a 6.3 release only a few weeks after the 6.2 release, it’s because we’re on a new development schedule. Instead of holding back new features for months and only putting out new releases a couple of times a year, we’re moving to releases every 1-2 months, getting the new stuff and bug fixes into your hands as quickly as possible. This fits in nicely with the new monthly subscription model for Web Performance QA Tester™, where the small monthly fee covers not just support but new features month after month. If we … Continue reading »
Categories: Companies

Appium Bootcamp – Chapter 6: Run Your Tests

Sauce Labs - Thu, 08/07/2014 - 22:25

This is the sixth post in a series called Appium Bootcamp by noted Selenium expert Dave Haeffner. Read: Chapter 1 | Chapter 2 | Chapter 3 | Chapter 4 | Chapter 5 | Chapter 6 | Chapter 7 | Chapter 8

Dave recently immersed himself in the open source Appium project and collaborated with leading Appium contributor Matthew Edwards to bring us this material. Appium Bootcamp is for those who are brand new to mobile test automation with Appium. No familiarity with Selenium is required, although it may be useful. This is the sixth of eight posts; two new posts will be released each week.

Now that we have our tests written, refactored, and running locally it’s time to make them simple to launch by wrapping them with a command-line executor. After that, we’ll be able to easily add in the ability to run them in the cloud.

Quick Setup

appium_lib comes pre-wired with the ability to run our tests in Sauce Labs, but we’re still going to need two additional libraries to accomplish everything: rake for command-line execution, and sauce_whisk for some additional tasks not covered by appium_lib.

Let’s add these to our Gemfile and run bundle install.

# filename: Gemfile

source 'https://rubygems.org'

gem 'rspec', '~> 3.0.0'
gem 'appium_lib', '~> 4.0.0'
gem 'appium_console', '~> 1.0.1'
gem 'rake', '~> 10.3.2'
gem 'sauce_whisk', '~> 0.0.13'

Simple Rake Tasks

Now that we have our requisite libraries let’s create a new file in the project root called Rakefile and add tasks to launch our tests.

# filename: Rakefile

desc 'Run iOS tests'
task :ios do
  Dir.chdir 'ios'
  exec 'rspec'
end

desc 'Run Android tests'
task :android do
  Dir.chdir 'android'
  exec 'rspec'
end

Notice that the syntax in this file reads a lot like Ruby — that’s because it is (along with some Rake specific syntax). For a primer on Rake, read this.

In this file we’ve created two tasks. One to run our iOS tests, and another for the Android tests. Each task changes directories into the correct device folder (e.g., Dir.chdir) and then launches the tests (e.g., exec 'rspec').

If we save this file and run rake -T from the command-line, we will see these tasks listed along with their descriptions.

> rake -T
rake android  # Run Android tests
rake ios      # Run iOS tests

If we run either of these tasks (e.g., rake android or rake ios), they will execute the tests locally for each of the devices.

Running Your Tests In Sauce

As I mentioned before, appium_lib comes with the ability to run Appium tests in Sauce Labs. We just need to specify a Sauce account username and access key. To obtain an access key, you first need to have an account (if you don’t have one you can create a free trial one here). After that, log into the account and go to the bottom left of your dashboard; your access key will be listed there.

We’ll also need to make our apps available to Sauce. This can be accomplished either by uploading the app to Sauce or by making the app available at a publicly accessible URL. The former approach is easy enough to accomplish with the help of sauce_whisk.

Let’s go ahead and update our spec_helper.rb to add in this new upload capability (along with a couple of other bits).

# filename: common/spec_helper.rb

require 'rspec'
require 'appium_lib'
require 'sauce_whisk'

def using_sauce
  user = ENV['SAUCE_USERNAME']
  key  = ENV['SAUCE_ACCESS_KEY']
  user && !user.empty? && key && !key.empty?
end

def upload_app
  storage = SauceWhisk::Storage.new
  app = @caps[:caps][:app]
  storage.upload app

  @caps[:caps][:app] = "sauce-storage:#{File.basename(app)}"
end

def setup_driver
  return if $driver
  @caps = Appium.load_appium_txt file: File.join(Dir.pwd, 'appium.txt')
  if using_sauce
    upload_app
    @caps[:caps].delete :avd # re: https://github.com/appium/ruby_lib/issues/241
  end
  Appium::Driver.new @caps
end

def promote_methods
  Appium.promote_singleton_appium_methods Pages
  Appium.promote_appium_methods RSpec::Core::ExampleGroup
end

setup_driver
promote_methods

RSpec.configure do |config|

  config.before(:each) do
    $driver.start_driver
  end

  config.after(:each) do
    driver_quit
  end

end

Near the top of the file we pull in sauce_whisk. We then add in a couple of helper methods (using_sauce and upload_app). using_sauce checks to see if Sauce credentials have been set properly. upload_app uploads the application from local disk and then updates the capabilities to reference the path to the app on Sauce’s storage.

We put these to use in setup_driver by wrapping them in a conditional to see if we are using Sauce. If so, we upload the app. We’re also removing the avd capability since it will cause issues with our Sauce run if we keep it in.

Next we’ll need to update our appium.txt files so they’ll play nice with Sauce.

 

# filename: android/appium.txt

[caps]
appium-version = "1.2.0"
deviceName = "Android"
platformName = "Android"
platformVersion = "4.3"
app = "../../../apps/api.apk"
avd = "training"

[appium_lib]
require = ["./spec/requires.rb"]

# filename: ios/appium.txt

[caps]
appium-version = "1.2.0"
deviceName = "iPhone Simulator"
platformName = "ios"
platformVersion = "7.1"
app = "../../../apps/UICatalog.app.zip"

[appium_lib]
require = ["./spec/requires.rb"]

In order to work with Sauce we need to specify the appium-version and the platformVersion. Everything else stays the same. You can see a full list of Sauce’s supported platforms and configuration options here.

Now let’s update our Rake tasks to be cloud aware. That way we can specify at run time whether to run things locally or in Sauce.

desc 'Run iOS tests'
task :ios, :location do |t, args|
  location_helper args[:location]
  Dir.chdir 'ios'
  exec 'rspec'
end

desc 'Run Android tests'
task :android, :location do |t, args|
  location_helper args[:location]
  Dir.chdir 'android'
  exec 'rspec'
end

def location_helper(location)
  if location != 'sauce'
    ENV['SAUCE_USERNAME'], ENV['SAUCE_ACCESS_KEY'] = nil, nil
  end
end

We’ve updated our Rake tasks so they can take an argument for the location. We then use this argument value and pass it to location_helper. The location_helper looks at the location value — if it is not set to 'sauce' then the Sauce credentials get set to nil. This helps us ensure that we really do want to run our tests on Sauce (e.g., we have to specify both the Sauce credentials AND the location).

Now we can launch our tests locally just like before (e.g., rake ios) or in Sauce by specifying it as a location (e.g., rake ios['sauce']).

But in order for the tests to fire in Sauce Labs, we need to specify our credentials somehow. We’ve opted to keep them out of our Rakefile (and our test code) so that we can maintain future flexibility by not having them hard-coded. This is also more secure, since we won’t be committing them to our repository.

Specifying Sauce Credentials

There are a few ways we can go about specifying our credentials.

Specify them at run-time

SAUCE_USERNAME=your-username SAUCE_ACCESS_KEY=your-access-key rake ios['sauce']

Export the values into the current command-line session

export SAUCE_USERNAME=your-username
export SAUCE_ACCESS_KEY=your-access-key

Set the values in your bash profile (recommended)

# filename: ~/.bash_profile

...
export SAUCE_USERNAME=your-username
export SAUCE_ACCESS_KEY=your-access-key

After choosing a method for specifying your credentials, run your tests with one of the Rake tasks and specify 'sauce' for the location. Then log into your Sauce account to see the test results and a video of the execution.

Making Your Sauce Runs Descriptive

It’s great that our tests are now running in Sauce. But it’s tough to sift through the test results since the name and test status are nondescript and all the same. Let’s fix that.

Fortunately, we can dynamically set the Sauce Labs job name and test status in our test code. We just need to provide this information before and after our test runs. To do that we’ll need to update the RSpec configuration in common/spec_helper.rb.

 

# filename: common/spec_helper.rb

...
RSpec.configure do |config|

  config.before(:each) do |example|
    $driver.caps[:name] = example.metadata[:full_description] if using_sauce
    $driver.start_driver
  end

  config.after(:each) do |example|
    if using_sauce
      SauceWhisk::Jobs.change_status $driver.driver.session_id, example.exception.nil?
    end
    driver_quit
  end

end

In before(:each) we update the name attribute of our capabilities (e.g., caps[:name]) with the name of the test. We get this name by tapping into the test’s metadata (e.g., example.metadata[:full_description]). And since we only want this to run if we’re using Sauce we wrap it in a conditional.

In after(:each) we leverage sauce_whisk to set the job status based on the test result, which we get by checking to see if any exceptions were raised. Again, we only want this to run if we’re using Sauce, so we wrap it in a conditional too.

Now if we run our tests in Sauce we will see them execute with the correct name and job status.

Outro

Now that we have local and cloud execution covered, it’s time to automate our test runs by plugging them into a Continuous Integration (CI) server.

Read: Chapter 1 | Chapter 2 | Chapter 3 | Chapter 4 | Chapter 5 | Chapter 6 | Chapter 7 | Chapter 8

About Dave Haeffner: Dave is a recent Appium convert and the author of Elemental Selenium (a free, once weekly Selenium tip newsletter that is read by thousands of testing professionals) as well as The Selenium Guidebook (a step-by-step guide on how to use Selenium Successfully). He is also the creator and maintainer of ChemistryKit (an open-source Selenium framework). He has helped numerous companies successfully implement automated acceptance testing; including The Motley Fool, ManTech International, Sittercity, and Animoto. He is a founder and co-organizer of the Selenium Hangout and has spoken at numerous conferences and meetups about acceptance testing.

Follow Dave on Twitter - @tourdedave

Categories: Companies

Sheridan Hindle Co-operates to replicate IT benefits

Original Software and Sheridan Hindle, CIO of Midcounties Co-op, spoke with Computer Weekly editor Cliff Saran about the quality assurance work they are doing together, particularly on iterative development. When Midcounties Co-operative decided to upgrade its payroll system, it took the opportunity to move from a waterfall to an agile methodology. […]
Categories: Companies

Amadeus Contribution to the Jenkins Literate Plugin and the Plugin's Value

This is one in a series of blog posts in which various CloudBees technical experts have summarized presentations from the Jenkins User Conferences (JUC). This post is written by Valentina Armenise, solutions architect, CloudBees, about a presentation called "Going Literate in Amadeus" given by Vincent Latombe, Amadeus at JUC Berlin.
The Literate plugin is built on the concept of literate programming, introduced by Donald Knuth: the idea that a program can be described in natural language, such as English, rather than in a programming language. The description is then translated automatically into source code, in a process completely transparent to the user.
The Literate plugin is built on top of two APIs:
  • Literate API, responsible for translating the descriptive language into source code
  • Branch API, the toolkit for handling multi-branch projects:
    • SCM API - provides the capability to interact with multiple heads of the repository
    • capability to tag some branches as untrusted and skip those
    • capability to discard builds
    • foundation for multi-branch freestyle projects
    • foundation for multi-branch template projects

Basically, the Literate plugin lets you describe your environment, together with the build steps your job requires, in a simple file (either a marker file or the README.md). The Literate plugin queries the repository looking for one or more branches that contain the descriptive file. If more than one branch contains this file (making it eligible to be built in a literate way) and no specific branch is specified in the job, then the branches are built in parallel. This means that you can create multi-branch projects where each branch requires different build steps or simply a different environment.
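As a rough illustration only (the exact file layout and section names follow the plugin’s own conventions, which this summary doesn’t spell out), a literate build description might look something like this:

# filename: README.md (hypothetical)

# My Application

## Environments

* java

## Build

    mvn clean install

The idea is that the environment section would name the labels a build machine must provide, and the code block under the build section would become the build step.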
The use of the Literate plugin becomes quite interesting when you need to define templates with customizable variables or to whitelist build sections.
Amadeus has invested resources in Jenkins in order to accomplish continuous integration. Over the years they have specialized in the use of the Literate plugin to make the creation of jobs easier, and they have become contributors to the plugin.

Vincent Latombe presenting his talk at JUC Berlin.
Click here to watch the video.
And click here to see the slides.
In particular, Amadeus invested resources in enhancing the plugin’s usability by introducing support for YAML, a descriptive language that leaves less room for error than the traditional Markdown, which is too open-ended.
How do we see the Literate plugin today?
With the introduction of CI, there are ongoing conversations about the best approach to merging and pulling changes to repositories.
Some people support the “feature branching” approach, where each new feature is a new branch and is committed to the mainline only when ready to be released in order to provide isolation among branches and stability of the trunk.
Although this approach is criticized by many who think that it is too risky to commit the whole new feature at once, it could be the best approach when the new feature is completely isolated from the rest (a completely new module) or in open source projects where a new feature is developed without deadlines and, thus, can take quite a while to be completed.
The Literate plugin works really well with the feature branching approach described above, since it would be possible to define different build steps for each branch and, thus, for each feature.
Also, this approach gets along really well with the concept of continuous delivery, where the main idea is that the trunk has to be continuously shippable into production.
How does it integrate with CD tools?
Today, we’re moving from implementing CI to CD: Jenkins is no longer a tool for developers only; it’s now capturing the interest of DevOps.
By using plugins to implement deployment pipelines (e.g., the Build Pipeline plugin, Build Flow plugin and Promotion plugin), Jenkins is able to handle all the phases of the software lifecycle.
The definition of environments and agents to build and deploy to is provided with integration to Puppet and Chef. These tools can be used to describe the configuration of the environment and apply the changes on the target machines before deployment.
At the same time, virtualization technologies that allow you to create software containers, such as Docker, are getting more and more popular.
How could literate builds take part in the CD process?
As said before, one of the things the Literate plugin simplifies is defining multiple environments and build steps in a single file: the build definition is stored in the same SCM as the job being built.
This means that the Literate plugin gets along really well with the infrastructure as code approach and tools like Docker or Puppet where all the necessary files are stored in the SCM. Docker, in particular, could be a good candidate to work with this plugin, since a Docker image is completely described by a single file (the Dockerfile) and it’s totally self-contained in the SCM.
What's next?
Amadeus is looking at adding new features to the plugin in the near future:
  • Integration with GitHub, Bitbucket and Stash pull request support
  • Integration with isolation features (i.e. sandbox commands within the container)

Do you want to know more?



Valentina Armenise
Solutions Architect, CloudBees

Follow Valentina on Twitter.


Categories: Companies

uTester Shares Software Testing World Cup Experience, Offers Advice

uTest - Thu, 08/07/2014 - 16:28

Mark and his team during the Software Testing World Cup.

Marek (Mark) Langhans is a Gold-rated tester and former Forums Moderator in the uTest Community, and hails from Prague. Mark has tested information systems, web, mobile and desktop applications of domestic financial institutions for a couple of years now. In June, he participated in the Europe Preliminary round of the Software Testing World Cup (STWC), and shares his experience here, along with some advice for future STWC participants.

Before I go into any details about the competition, let me thank the entire team and all the judges and product owners behind the Software Testing World Cup (STWC).

If testing needed a push into the general public eye, this event was the right way to go about it. Not only has it given us testers a way to compete and connect with each other, and to see our limitations, strengths and weaknesses, but it has also put the testing profession into a whole new perspective. Testing has been made cool, and that is very rare to do.

Gameday

On Friday, the 13th of June, three of my colleagues and I participated in the Europe Preliminary round of the STWC. Even though we had a pretty awesome base in our firm’s HQ basement, in the end we didn’t finish near the top. However, you can’t put a price tag on what we took away from it. In three hours, you learn more about yourself and your testing capabilities than you may have in your whole testing career.

The competition started on time. Thirty minutes before the official start, we all received emails introducing us to the software under test (Sales Tool — for more details, check out the YouTube stream), the scope and, of course, some tips on what we should focus on. The email contained a link to the application so we could poke around in it before the actual competition. These thirty minutes flew by, as the application was something none of us had come across before, so we tried to figure out what we could actually test and how.

The email also contained a link to a YouTube channel which streamed, in real time, a Google hangout of the judges, customer and product owner. In the channel’s comments area, we could ask questions about the scope, the application and everything else about the competition. We could also use Twitter to ask these questions. My team and I were focused more on the application, as it was an unknown to us, so we didn’t pay much attention to the stream, even though we had a huge screen just for it in front of us. We listened with only one ear, which in retrospect was probably a mistake.

The application wasn’t that complex, so there weren’t that many features, but the ones we could cover weren’t that easy to test. We found a few issues, and everything we came across was reported in Agile Manager, the official bug reporting tool. Before the competition, we prepared a bug report template so we would all include the same information in our reports, but we weren’t consistent…another mistake, as this was taken into consideration in judging.

There was one huge issue with the reporting tool: other teams reported that they could see other teams’ bug reports. This made the competition a little uneven, to be honest, but I do not think it made any difference in the end.

After 2 1/2 hours of testing the application, we moved on to creating the test report. We had prepared a template for it, so we just filled in our findings. For non-English-speaking teams, this was the hardest part, I guess: making it understandable without any major grammatical mistakes. We followed a few tips from Global Jury member Matt Heusser: keep it simple, on a few pages with clearly visible sections; have a summary at the top; and below it, go into some detail about your findings and how you came to them.

We sent out the report five minutes after 12. We immediately received an auto-reply that the report had been received, and after that, we packed our stuff and went home. Of course, our heads were filled with what we had done, what we could have done better and what we would do better next time.

Advice for the Prospective STWC Participant

When I look back at the whole experience and compare it to uTest, for instance, I originally thought this would be very similar in many ways, but it wasn’t.

Having three hours to test something isn’t that bad — you get used to it when testing applications here at uTest. But with three hours at uTest, you just report bugs and that is it. Maybe you’ll fill in a review whenever you have time. In the competition, though, there is a responsibility to report your findings back to the customer and to recommend releasing or postponing the application. Doing all of this in three hours is quite different from just testing and then moving on to another project.

This contest wasn’t only about how well you test and how many valuable bugs you log, but also about communication, both within your team and with other teams and the team behind STWC. This was mentioned many times by Matt, and teams that were publicly visible were given additional points, which may have helped a few of them get a better position in the end.

As this contest was made for testers and our profession, we should have given something small back in return — at least promoting it on social media and so on. A colleague of ours tweeted about our team a few times, but looking back, it wasn’t enough. It wasn’t all about the points, but about being part of something greater and helping to achieve something.

If you decide to be part of an STWC team next year, take the time to sit down with your team before the competition and set a few strategies. We did so only once, and in that time we only came up with bug and test report templates. We didn’t settle on a strategy for approaching the application under test, or at least make a list of what to test for both mobile and web applications. In just three hours, there isn’t much time to be creative, and when you have no guidelines, you may start panicking.

Three hours is a very short time, so don’t try to test everything: the entire application and all its features. It is more about making compromises and focusing on a few key areas, with each team member on something different, or working in pairs in the same area.

Looking back at the competition, communication within our team helped us understand the SUT very quickly, but we tried to test every feature available and so ended up only scratching the surface, never going deep enough. When you compromise, you at least have something to write about in your test report, where you can detail what you covered and what you would cover if you had more time.

Categories: Companies

The Perfect Code Coverage Score

NCover - Code Coverage for .NET Developers - Thu, 08/07/2014 - 13:07

You manage what you measure – but what if you are looking at the wrong thing? The metrics we define influence our process and end result. For example, trying to gauge your speed by the sound of your radio would lead to noise dampening in the car and volume controls on the dashboard. While these features may be an interesting experiment, volume is not the information you need to know your speed. Is volume the best metric to look at? Probably not. Actually – no. It is definitely not. Please practice responsible driving.

Back to the topic at hand: we see this same confusion when we talk to customers who are trying to define their code coverage goals and are looking for the perfect score. It is very important to select the right combination of metrics to measure the effectiveness of your testing strategies and the quality of your code base, and to guide your development and quality efforts moving forward. But striving for a perfect 100% on a single basic metric may be guiding you down the wrong path.

We have talked previously about some of the best practices we have found in our years of covering code. Recently, we came across a post by Anders Abel discussing some of the same things we see every day. He discusses the difference between line coverage and functional coverage, and even shows some pretty strong examples of how bad code can sneak through line coverage tests.
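To make the distinction concrete, here is a minimal Ruby sketch of our own (not taken from Abel’s post):

# One conditional line, two branches:
def discount(price, is_member)
  price -= 10 if is_member   # this single line hides an untested path
  price
end

# This one test executes every line, so line coverage reports 100%...
raise "test failed" unless discount(100, true) == 90

# ...but the is_member == false path is never exercised. A bug on that
# branch would slip through; branch coverage would flag the missing test.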

Our quest for what seems like a good measure – 100% seems pretty perfect – may not be telling us the whole picture. Code coverage metrics like branch coverage, sequence point coverage and the change-risk-anti-patterns score help you and your team build quality code and let you know that it is good. There is no one perfect score. Each team is different. The important piece is setting the foundation for developing meaningful metrics that influence your code in meaningful ways.

The post The Perfect Code Coverage Score appeared first on NCover.

Categories: Companies

Late Community Update 2014-06-02 REST API, Visual Studio Update 3, data indexing, Project Orleans and more

Decaying Code - Maxime Rouiller - Thu, 08/07/2014 - 08:14

So I was at the MVP Open Days and I’ve missed a few days. It seems that my fellow MVP James Chambers has started a great initiative about exploring Bootstrap and MVC with lots of tips and tricks. Do not miss out!

Otherwise, this is your classic “I’ve missed a few days so here are 20,000 interesting links that you must read” kind of day.

Enjoy!

Must Read

AppVeyor - A good continuous integration system is a joy to behold - Scott Hanselman (www.hanselman.com)

This URL shortener situation is officially out of control - Scott Hanselman (www.hanselman.com)

James Chambers Bootstrap and MVC series

Day 0: Bootstrapping MVC for the Next 30 Days | They Call Me Mister James (jameschambers.com)

Day 1: The MVC 5 Starter Project | They Call Me Mister James (jameschambers.com)

Day 2: Examining the Solution Structure | They Call Me Mister James (jameschambers.com)

Day 3: Adding a Controller and View | They Call Me Mister James (jameschambers.com)

Web Development

How much RESTful is your API | Bruno Câmara (www.bfcamara.com)

Data-binding Revolutions with Object.observe() - HTML5 Rocks (www.html5rocks.com)

ASP.NET

ASP.NET Moving Parts: IBuilder (whereslou.com)

Supporting only JSON in ASP.NET Web API - the right way - StrathWeb (www.strathweb.com)

Shamir Charania: Hacky In Memory User Store for ASP.NET Identity 2.0 (www.shamirc.com)

.NET

Missing EF Feature Workarounds: Filters | Jimmy Bogard's Blog (lostechies.com)

Visual Studio/Team Foundation Server 2013 Update 3 CTP1 (VS 2013.3.1 if you wish) (blogs.msdn.com)

TWC9: Visual Studio 2013 Update 3 CTP 1, Code Map, Code Lens for Git and more... (channel9.msdn.com)

.NET 4.5 is an in-place replacement for .NET 4.0 - Rick Strahl's Web Log (weblog.west-wind.com)

ASP.NET - Topshelf and Katana: A Unified Web and Service Architecture (msdn.microsoft.com)

Windows Azure

Episode 142: Microsoft Research project Orleans simplify development of scalable cloud services (channel9.msdn.com)

Tool

JSON to CSV (konklone.io)

Search Engines

The Absolute Basics of Indexing Data | Java Code Geeks (www.javacodegeeks.com)

Categories: Blogs

Testing the Test

Test Driven Developer - Thu, 08/07/2014 - 04:35
When practicing Test Driven Development, we use the three steps Red, Green, Refactor. The first step is meant to check the test itself, to make sure that it will actually fail when it's supposed to. I have found quite a few instances where tests have been written without this critical first step. The reason behind this step is to make sure we can trust the test itself; if we can't be sure it fails when it should, then we probably shouldn't be including the test in our automation. If it passes when it should fail, then it is giving us false information, and that is worse than no information. We stick to the recommended approach:

1. Write a test, it fails (no code to test yet) (Red)
2. Write just enough code to make the test pass (Green)
3. Refactor the code (AND the test code), keeping the test passing
3A. (This is where any simple logic would be written)
When satisfied, start over with a new test.

I find that step 3A is where we lose a lot of folks. Simple logic pretty much means a calculation or a deterministic algorithm. If there is any kind of branch statement (IF, SWITCH, etc.), it should be coded only as a result of new tests.
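As a minimal sketch of the cycle in Ruby with RSpec (the Calculator class and its spec are hypothetical, purely for illustration):

# filename: calculator_spec.rb

require 'rspec/autorun'

# 1. Red: write the test first. Run it before the Calculator class below
#    exists and it fails (uninitialized constant Calculator).
describe 'Calculator' do
  it 'adds two numbers' do
    expect(Calculator.new.add(2, 3)).to eq(5)
  end
end

# 2. Green: write just enough code to make the test pass.
class Calculator
  def add(a, b)
    a + b
  end
end

# 3. Refactor: clean up the code and the spec, keeping the test green.
#    Any new branch (IF, SWITCH, etc.) appears only after a new failing test.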

If we stay with this approach, we will have valuable information about the state of our code and a mechanism to make sure the code actually does what we wanted it to do.

Categories: Blogs

Cloud9 + Sauce Labs Integration: Learn How It Works [WEBINAR]

Sauce Labs - Thu, 08/07/2014 - 01:29

Ever wanted to develop and test applications directly from your browser? Cloud9 enables users to do just this using their powerful cloud-based development environment. With their recent release and new integration with Sauce Labs, users can now instantly test mobile and web apps across any browser that Sauce Labs supports – without ever leaving the Cloud9 interface.

Join us for our latest webinar showcasing the integration on Friday, August 29, at 11am PST.

Ruben Daniels, Cloud9's founder and CEO, and Jonathan Lipps, Sauce Labs' Director of Ecosystems and Integrations, will walk you through Cloud9's setup and how to test and debug across multiple browsers and platforms with Sauce Labs.

This 30 minute webinar includes a Q&A. Click here to sign up today!

All registrants will receive a link to the recording and other assets following the webinar, regardless of attendance.

Categories: Companies

Germany Gears Up for SoCraTes 2014 Conference

uTest - Wed, 08/06/2014 - 22:32

The 4th International Software Craftsmanship and Testing (SoCraTes) Conference kicks off in Soltau, Germany tomorrow and runs until August 10, 2014. What sets SoCraTes apart from other testing conferences is that it is run using Open Space Technology (OST). OST is a way of hosting conferences that is “focused on a specific and important purpose or task—but beginning without any formal agenda, beyond the overall purpose or theme.”

In this case, the event is about the sustainable creation of useful software in a responsible way and is a joint effort of all Softwerkskammer groups. The show includes hands-on coding sessions, sessions focused on discussion, and interactive talks.

You can get an idea of the schedule for this year’s show, as well as read about what happened at last year’s event from Florian Hopf, Samir Talwar, and others.

Follow tweets from this year’s SoCraTes event via their Twitter account @socrates_2014.

Want to know what other events are happening soon? Check out upcoming software testing events like SoCraTes 2014 on the uTest Events Calendar.

Categories: Companies

Unit Test Execution in SonarQube

Sonar - Wed, 08/06/2014 - 15:26

Starting with Java Ecosystem version 2.2 (compatible with SonarQube version 4.2+), we no longer drive the execution of unit tests during Maven analysis. Dropping this feature seemed like such a natural step to us that we were a little surprised when people asked us why we’d taken it.

Contrary to popular belief we didn’t drop test execution simply to mess with people. :-) Actually, we’ve been on this path for a while now. We had previously dropped test execution during PHP and .NET analyses, so this Java-only, Maven-only execution was the last holdout. But that’s trivial as a reason. Actually, it’s something we never should have done in the first place.

In the early days of SonarQube, there was a focus on Maven for analysis, and an attempt to add all the bells and whistles. From a functional point of view, the execution of tests is something that never belonged to the analysis step; we just did it because we could. But really, it’s the development team’s responsibility to provide test execution reports. Because of the potential for conflicts among testing tools, the dev team are the only ones who truly know how to correctly execute a project’s test suite. And in the words of SonarSource co-founder and CEO, Olivier Gaudin, “it was pretentious of us to think that we’d be able to master this in all cases.”

And master it, we did not. So there we were, left supporting a misguided, gratuitous feature that we weren’t sure we had full test coverage on. There are so many different, complex surefire configuration cases to cover that we just couldn’t be sure we’d implemented tests for all of them.

Plus, this automated test execution during Java/Maven analysis had an ugly technical underbelly. It was the last thing standing in the way of removing some crufty, thorn-in-the-side old code that we really needed to get rid of in order to be able to move forward efficiently. It had to go.

We realize that switching from test execution during analysis to test execution before analysis is a change, but it shouldn’t be an onerous one. You simply go from

mvn clean install
mvn sonar:sonar

to

mvn clean org.jacoco:jacoco-maven-plugin:prepare-agent install -Dmaven.test.failure.ignore=true
mvn sonar:sonar

Your analysis will show the same results as before, and we’re left with a cleaner code base that’s easier to evolve.

Categories: Open Source

Kanban and Recruitment

The Social Tester - Wed, 08/06/2014 - 15:00
We’ve been recruiting heavily for a while now and it’s been an epic journey from those early days 4 years ago when I started here at NewVoiceMedia to where we are today. The Dev team has grown very quickly indeed and we’re still recruiting for talented Developers to join us. I’ve been involved in pretty […]
Categories: Blogs

How to Spruce up your Evolved PHP Application – Part 2

In the first part of my blog I covered the data side of the tuning process on my homegrown PHP application Spelix: database issues, caching on both the server and the client. By just applying these insights I could bring Spelix to a stage where the number of users could be increased by more than […]

The post How to Spruce up your Evolved PHP Application – Part 2 appeared first on Compuware APM Blog.

Categories: Companies

Los Alamos National Laboratory receives new HPC system

Kloctalk - Klocwork - Wed, 08/06/2014 - 15:00

High performance computing continues to grow in popularity, providing research organizations and other firms with advanced capabilities that are simply not available through more conventional systems.

The latest organization to expand its use of HPC technology is the Los Alamos National Laboratory. The system, called Wolf, can operate at 197 teraflops and contains 19.7 terabytes of memory. Wolf users have access to 86.3 million central processing unit core hours per year.

"This machine modernizes our mid-tier resources available to Laboratory scientists," said Bob Tomlinson, a member of the Los Alamos National Laboratory's High Performance Computing group. "Wolf is a critical tool that can be used to advance many fields of science."

The Los Alamos National Laboratory declared that it will use Wolf to conduct research in areas such as climate science, astrophysics modeling and materials analysis. Such efforts will help ensure the laboratory remains a world leader in the fields of high performance computing and computational science, particularly in regard to national security issues.

HPC expanding
Implementations such as this one are increasingly common. A new IDC report found that the worldwide HPC market is expected to grow through 2018, THE Journal noted.

"HPC technical server revenues are expected to grow at a healthy rate because of the crucial role they play in economic competitiveness as well as scientific progress," said Earl Joseph, program vice president for technical computing at IDC, the news source reported. "As the global race toward exascale computing fuels the high end of the market, more small and medium-sized businesses and research organizations are exploiting HPC servers for advanced simulations and high performance data analysis."

As HPC grows, it’s imperative for organizations to adopt tools that are built to handle multiple CPUs and processes, and support common HPC platform architectures. Choosing scalable debugging tools from the outset, for example, will help reduce development times and shorten the time it takes to localize and fix problems on live systems.

Categories: Companies

Women - Pay and Workplace inequality

Yet another bloody blog - Mark Crowther - Wed, 08/06/2014 - 13:40
Despite great progress, women still experience a lot of pay and workplace inequality. In the last 10 to 15 years there's been a big push from government, companies and other organisations to reduce the level of inequality. Some approaches have worked well, others not so much. That's always the case of course, but at least the general effort has been towards a noble objective. Just in case your reaction is to think this isn't a big issue, let me assure you it is, both in and of itself and in the context of women's general situation.

There are many aspects of women's situation in the world that we should be shouting about and demanding change on. From the disproportionate level of oppression and violence women suffer the world over, to the many derogatory elements our language is littered with, there is an absolute shopping list of negative stuff directed at women that should be so far in the depths of history it seems as ridiculous as women not having the vote or not being allowed to drive. Oh wait, depending on your country women still don't even have those rights!

Just to be clear, and before anyone switches off in a possibly oh-so-predictable way, I'm no 'feminist' or whatever the male version is. It's too much us-and-them and has had its time. We need a more inclusive 'something-ist' to describe the current state of affairs, something that applies equally to men and women of all races, cultures, classes, education levels, etc. I have a rather utopian trans-humanist perspective on how future society could be if only we could 'dump the baggage', as a friend once summarised the needed action. However, before we can arrive at the next step of this inclusive perspective, we need to address the glaring inequalities that women (and other 'groups') have to endure right now. A glaring, in-your-face issue we are presented with every day as technologists in our places of work, an issue we can call out easily, is pay and workplace inequality for our female colleagues. But what is it, why should we care and what can we do about it?

Let's talk pay. I suspect most people will agree women can easily find themselves earning less than men. It seems that typically a woman can expect to earn around 10% less than a man during her working life; when she gets to 50, that opens up to around 20% less. What might a lifetime's earning loss look like, then? Let's keep the figures modest and easy to work with, as this is an illustration, not a scientific reckoning. Say she starts at 25, after uni and job hunting, on £35k, and gets to £50k by 50. That's an average of £42.5k per year x 25 years, or £1,062,500. Hey, not bad, and she still has say 20 years to work. Except there are a few catches to this. Firstly, she could have been earning an extra 10% if she were a man: another £106,250, which would be nice in the pension pot at least. Add to that that 43% of women quit work when they have a family; if so, that salary is cut by around 10 years, assuming they go back to work after that period (*and assuming you leave on your own terms and aren't one of the 30k women who get sacked for being pregnant each year).

That means our typical mum could be sacrificing and losing out on over £400k before she reaches 50. Turn it around: on reaching 50, the man has another £400k in his pocket (well... subject to tax etc). If the size of that amount isn't a shock to you (well done you for earning crap loads!), how about on your 50th birthday I give you a gift of £400k? Would that take your breath away? Wait a minute: added together, the above rough figures will see me, as a man, earning £1.16m, while my wife can expect to earn around £740k, assuming she can get back into work at the same pay grade after 10 years. Hmm... why do I doubt that. (And that's assuming women even earned this much; 70% of people on the national minimum wage are, you guessed it... women.)

But wait, at least 57% of women stay in work and keep earning... 10% less, but hey, small fry compared to the above. Just to give a nice kicker to the story though, two more things. Women don't all quit work at 50; in fact there's been a rise in the number of working women aged 50+. They get an extra insult by earning even less than men at this point, typically up to 20% less. That's assuming they're fit to work, as disability and long-term illness that prevent working hit women harder than men on reaching 50. According to the TUC, 3 in 5 women over 50 are in work, with most earning less than £10k per year through part-time jobs. In other words, 40% of women 50+ are not in work, and of those that are, many earn under £1k per month. It's estimated there are around 2.8 million part-time working women 'employed below their potential'. Great, just as the run-up to retirement really begins. To which... the extra kicker.

Employer pension schemes typically pay an amount of your salary towards your pension. It may be that you pay a percentage and they match it or top it up to a percentage. As we just discussed, women get about 10% less salary, so the overall contribution to their pensions during their working lives, mums or not, is down 10%. Go Google 'pension poverty women' for more on the disaster zone that women's pensions are. Well, don't worry: you only need 25 to 30 years of National Insurance contributions from your working life to qualify for the full state pension. Except you had kids, and now 40% of women over 50 aren't in work... tricky. Does anyone else feel like swearing yet?

Even with my crappy maths, the fact that women are getting a raw deal just on pay is not in question. I don't have the answer to the above; it's complex and there's no single 'thing' to change. But, at the very least, when women are in work they should be getting equal pay. The 'you chose to have a family' argument I can roll with, but at least enable women to get employed and earn with equality when they are working, at whatever age. I can't believe that in 2014 we're even talking about this!

As fellow technologists we need to make it clear that in our workplaces we absolutely will not accept anything less than complete equality. In fact, out of fairness, I'd go for inequality in favour of women. Make it clear to your employers that this is a matter of concern for you and that you want to know what they're doing about it. If you run your own business, take steps to address this.

As part of the community, support women in tech by demanding equality in pay, career development and access to roles. Do whatever you can; one battle at a time is better than doing nothing!

I'll leave it here, we've not even covered workplace inequality. Blimey, big topics.

Mark

https://www.nomisweb.co.uk/census/2011/DC3302EW/view/2092957703?rows=c_sex&cols=c_age

http://www.thisismoney.co.uk/money/pensions/article-2521260/Almost-twice-men-107-basic-state-pension-women.html

http://www.nidirect.gov.uk/how-your-state-pension-is-worked-out

http://ukfeminista.org.uk/take-action/facts-and-statistics-on-gender-inequality/

http://www.tuc.org.uk/equality-issues/gender-equality/gender-pay-gap-twice-large-women-their-50s






Categories: Blogs

Ruby Basics - Wrap Up 1

Yet another bloody blog - Mark Crowther - Wed, 08/06/2014 - 01:35
Hey All!

Well, here we are, 15 videos in and already at a wrap-up of what we've learned so far!

In this post, let's do a code walk-through of a script that includes the elements we've covered, in the rough order of the videos. Be sure to check out the Ruby playlist on YouTube if you've not done so already. Grab a copy of the script here.



YouTube Channel: [WATCH, RATE, SUBSCRIBE] http://www.youtube.com/user/Cyreath

The first thing to look at is getting user input and assigning the data to local variables. Here we do a basic puts and, using a local variable called userFirstName, we assign it a value the user enters. On this value we call the string class methods chomp, downcase and capitalize. In the second set of lines, we use #{interpolation} and call a string method on that too. Quite a lot in 4 lines!

# Let's declare our LOCAL variables and get some values straight away (Video 2 and 5)
puts "Welcome, what's your first name? (Dave, Alan or yours)"
userFirstName = gets.chomp.downcase

puts "Hi, #{userFirstName.capitalize}. What's your surname?"
userLastName = gets.chomp.capitalize

Next we assign a global variable so we have access to it anywhere in our script, pretty similar to what we've just done. Global variables are identified by the $ symbol.

# Let's create an example GLOBAL variable (Video 6)
$globalNameText = "User name is: "

While we're looking at variables, let's declare an instance variable and do some concatenation. You'll recall that instance variables are identified by the @ symbol. (We'll look at class @@ variables later.) We also snuck in another string method, upcase.

There are two ways to concatenate text here, using either the + symbol or the << symbol. The Ruby docs tell us that + creates a new string, whereas << appends the string to whatever precedes it. In terms of speed and memory usage this could be significant on large data sets! Let's be clear: userFirstName + " " creates a new string that now includes a space, while << userLastName just adds to the existing string.

Can you design a test to prove this?
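Here's one quick way to test it, using object_id to check whether we still have the same String object (a small sketch you can paste into irb):

name = "Mark"
original = name.object_id

name = name + "!"                 # + builds a brand new String
puts name.object_id == original   # => false, a different object

name = "Mark"
original = name.object_id

name << "!"                       # << appends to the existing String
puts name.object_id == original   # => true, the same object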

# Now we'll make an INSTANCE variable from the user name (Video 7)
@userFullname = userFirstName.upcase + " " << userLastName
#upcase is also string method

Next we define a constant, which we do by using all uppercase. Age isn't a good example, but it works for our purposes :) As we're getting a number, we don't use the usual downcase, as that wouldn't make sense. Try it and see what happens.

# Finally let's get a CONSTANT in use (Video 8)
puts "#{userFirstName.capitalize}, how old are you?"
USER_AGE = gets.chomp #we UPPERCASE Constants to differentiate against variables

We print out a message using our Global and Instance variables, showing concatenation works just fine on these too.

# Put the correctly formatted name to screen
puts $globalNameText + @userFullname

As we want to have data already stored for our program to use, we next create a simple array of data, just with strings in this example.

# set up an array with the template roles (Video 9)
rolesArray = ["Developer", "Tester"]

Now that we have our data, let's do some evaluation of it and decide what the outcome of that should be. The first step is a Case statement, against whatever names the user entered above.

Here, we respond to the user with data we've pulled out of the array we just set up. You'll recall that we index into arrays starting at position 0. So when we run the script, Dave will be assigned the role at position 0, a Developer.

# Depending on what the user's first name is we'll respond with more details (Video 10)
case userFirstName
when "dave"
  puts "#{userFirstName.capitalize} you are a #{USER_AGE} year old #{rolesArray[0]}"
  userRole = rolesArray[0]

when "alan"
  puts "#{userFirstName.capitalize} you are a #{USER_AGE} year old #{rolesArray[1]}"
  userRole = rolesArray[1]

else
  puts "You must be a new member of staff, welcome!"

end

Next, we'll ask about a career change, and instead of using a case statement to control the flow of our response, we'll use a basic if and then a nested if statement.

As we've discussed before, we can evaluate the response using regular expressions or direct evaluation.

# Here we use a nested IF to check for career changes (Video 12)
puts "Do you want a change of career? (Yes or Y or No or N)"
careerChange = gets.chomp.downcase

if careerChange =~ /\A(yes|y)\z/ then

  if userFirstName == "dave" # (Video 11)
    puts "#{userFirstName.capitalize}, you are now a #{rolesArray[1]}"
    userRole = rolesArray[1]

  elsif userFirstName == "alan" or userFirstName == "Alan"
    puts "#{userFirstName.capitalize}, you are now a #{rolesArray[0]}"
    userRole = rolesArray[0]

  else
    puts "Easy #{userFirstName.capitalize}, you just joined us!"

  end

elsif careerChange =~ /\A(no|n)\z/ then
  puts "Great, keep up the good work!"

end

To finish off the main body of the example script we'll now use a while statement, to keep asking a question until a condition is met. We've also snuck in an if ternary to decide how to respond if the hoursWorked value is under 8.

# Now we'll check if the user has done a day's work!
hoursWorked = 0
userRecord = [userFirstName, userLastName, userRole, USER_AGE]

while hoursWorked < 8 # we could do an UNTIL hoursWorked == 8 here instead (Video 13)
  puts "How many more hours have you now worked #{userFirstName}? (enter 0 to 8)"
  puts "total hrs worked so far is: #{hoursWorked}"

  hoursWorked = hoursWorked + gets.chomp.to_i

  hoursWorked < 8 ? (puts "Keep going, the day's not over!") : (puts "Well done, go home and relax.") # example of an if ternary (Video 13)

end

Just to finish, let's add a new data element to the end of the userRecord array and print the results.

userRecord.push hoursWorked #Pushing to an array (Video 14)
puts userRecord

Don't forget to grab a copy of the script and play through it yourself!

Mark.


Read More
http://cyreath.blogspot.co.uk/2014/07/ruby-array-adding-and-removing-data.html
http://cyreath.blogspot.co.uk/2014/07/ruby-if-ternary-until-while.html
http://cyreath.blogspot.co.uk/2014/07/ruby-nested-if-statements.html
http://cyreath.blogspot.co.uk/2014/07/ruby-if-statements.html
http://cyreath.blogspot.co.uk/2014/07/ruby-case-statements.html
http://cyreath.blogspot.co.uk/2014/05/ruby-w-vs-w-secrets-revealed.html
http://cyreath.blogspot.co.uk/2014/05/ruby-variables-and-overview-of-w-or-w.html
http://cyreath.blogspot.co.uk/2014/02/ruby-constants.html
http://cyreath.blogspot.co.uk/2014/02/ruby-global-variables.html
http://cyreath.blogspot.co.uk/2014/02/ruby-local-variables.html
http://cyreath.blogspot.co.uk/2014/02/ruby-variables-categories-and-scope.html
http://cyreath.blogspot.co.uk/2014/01/ruby-variables-part-1.html
http://cyreath.blogspot.co.uk/2014/01/ruby-getting-and-using-user-input.html
http://cyreath.blogspot.co.uk/2014/01/download-and-install-ruby.html




YouTube Channel: [WATCH, RATE, SUBSCRIBE] http://www.youtube.com/user/Cyreath
Categories: Blogs

You mean it’s now possible to find the gaps and areas of risk in my QA process?

The Kalistick Blog - Tue, 08/05/2014 - 19:11

After a number of recent meetings where we’d positioned the Coverity Test Advisor – QA solution, it has become apparent that organizations just don’t know how to optimize their testing and quality assurance processes to expose the untested parts of their code. That’s no surprise, though, as up until now there hasn’t been anything to help in this endeavor. I’m not talking code coverage – I’m talking code intelligence!

What if you could eliminate areas of risk by capturing the test footprint and highlighting the code that wasn’t tested…wouldn’t that be hugely impactful? What if you could test faster by prioritizing testing based on the impact of change…wouldn’t that speed things up? There’s clearly excitement building among our customers now that we can deliver a technology that fully addresses these questions and perhaps finally delivers the solution QA teams have been looking for. Test Faster, Test Smarter is the way to go…

Click here to learn more!

The post You mean it’s now possible to find the gaps and areas of risk in my QA process? appeared first on Software Testing Blog.

Categories: Companies

Automating CD pipelines with Jenkins - Part 2: Infrastructure CI and Deployments with Chef

This is part of a series of blog posts in which various CloudBees technical experts have summarized presentations from the Jenkins User Conferences (JUC). This post is written by Tracy Kennedy, solutions architect, CloudBees, about a presentation given by Dan Stine, Copyright Clearance Center at JUC Boston.

In a world where developers are constantly churning code changes and Jenkins is building those changes daily, there is also a need to spin up test environments for those builds in an equally fast fashion.

To respond to this need, we’re seeing a movement towards treating “infrastructure as code.” This goes beyond simple BAT files and shell scripts -- instead, “infrastructure as code” means that you can automate the configurations for ALL aspects of your environment, including the infrastructure and the operating system layers, as well as infrastructure orchestration with tools like Chef, Ansible and Puppet.

These tools’ automation scripts are version controlled like the application code, and can even be integrated with the application code itself.
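For a flavor of what such code looks like, here is a minimal, hypothetical Chef recipe (a sketch of ours, not taken from the talk). It declares the desired state of a web server in Ruby and lives in version control like any application code:

# filename: cookbooks/webserver/recipes/default.rb (hypothetical)

# Install nginx and manage its config from a template, reloading on change.
package 'nginx'

template '/etc/nginx/nginx.conf' do
  source 'nginx.conf.erb'
  notifies :reload, 'service[nginx]'
end

# Make sure the service starts at boot and is running now.
service 'nginx' do
  action [:enable, :start]
end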

While configuration management tools date back to at least the 1970s, this way of treating infrastructure code like application code is much newer and can be traced to at least CFEngine in the 90s. Even then, these declarative configuration tools didn’t start gaining popularity until late 2011.



Infrastructure CI

This rise of infrastructure code has created a new use case for Jenkins: as a CI tool for an organization’s infrastructure.

At the 2014 Boston Jenkins User Conference, Dan Stine of the Copyright Clearance Center presented how he and his organization met this challenge. According to Stine, the Copyright Clearance Center’s platform efforts began back in 2011. They saw “infrastructure as code” as an answer to the plight of their “poor IT ops guy,” who was being forced to deploy and manage everything manually.

Stine compared the IT ops guy to the infamous “Brent” of The Phoenix Project: all of their deployments hinged on him, he was overwhelmed by the load, and he became the source of their bottlenecks.

To solve this problem, they set two goals to improve their deployment process:
1. Reduce effort
2. Improve speed, reliability and frequency of deployments

Jenkins and Chef
As for the tools to accomplish this, the organization specifically picked Jenkins and Chef, as they were already familiar and comfortable with Jenkins, and knew both tools had good communities behind them. They also used Jenkins to coordinate with Liquibase to execute schema updates, since Jenkins is a good general purpose job executor.

They installed the Chef client onto nodes they registered on their Chef server. The developers would then write code on their workstations and use tools like Chef’s “knife” to interact with the server.

Their Chef code was stored in GitHub, and they pushed their Cookbooks to the Chef server.

For Jenkins, they would give each application group their own Cookbook CI job and Cookbook release job, which would be run by the same master as the applications’ build jobs. The Cookbook CI jobs ran any time that new infrastructure code was merged.

They also introduced a new class of slaves, which had the required RubyGems installed for the Cookbook jobs and Chef with credentials for the Chef server.

Cookbook CI Jobs and Integration Testing with AWS

The Cookbook CI jobs first run static analysis of the code’s syntax (JSON, Ruby and Chef), followed by integration testing using the kitchen-ec2 plugin to spin up an EC2 instance in a way that mimics the actual deployment topology for an application.

Each EC2 instance was created from an Amazon Machine Image that was preconfigured with Ruby and Chef, and each instance was tagged for traceability purposes. Stine explained that they would also run chef-solo on each instance to avoid having to connect ephemeral nodes to their Chef server.

Cookbook Release Jobs
The Cookbook release jobs, by contrast, were triggered manually. They ran the same tests as the CI jobs, but would also upload new Cookbooks to the Chef server.

Application Deployment with Chef

From a workstation, code would be pushed to the Chef repo on GitHub. This would then trigger a separate Jenkins master dedicated to deployments. This deployment master would then pull the relevant data bags and environments from the Chef server. The deployment slaves kept the SSH keys for the deployment nodes, along with the required gems and Chef with credentials.

Stine then explained the two deployment job types for each application:

1. DEV deploy for development
2. Non-DEV deploy for operations

Non-DEV jobs took an environment job parameter to define where the application would be deployed, while both types took application group version numbers. These deployment jobs would edit application data bags and application environment files before uploading them to the Chef server, find all nodes in the specified environment with the deploying app’s recipes, run the Chef client on each node, and send an email notification with the result of the deployment.
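As a rough sketch of the data bag step described above, a deploy job could drive Chef’s Ruby API along these lines (the bag, item, and key names here are hypothetical, as is the config path):

# hypothetical sketch of the deploy job's data bag update step
require 'chef'

Chef::Config.from_file('/etc/chef/client.rb') # load Chef server credentials

item = Chef::DataBagItem.load('applications', 'myapp')
item['version'] = ENV['APP_VERSION'] # version number passed in as a job parameter
item.save                            # upload the edited item back to the Chef server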


Click here for Part 1.


Tracy Kennedy
Solutions Architect
CloudBees

As a solutions architect, Tracy's main focus is reaching out to CloudBees customers on the continuous delivery cloud platform and showing them how to use the platform to its fullest potential. (A Meet the Bees blog post about Tracy is coming soon!) For now, follow her on Twitter.
Categories: Companies

Appium Bootcamp – Chapter 5: Writing and Refactoring Your Tests

Sauce Labs - Tue, 08/05/2014 - 17:05

appium_logoThis is the fifth post in a series called Appium Bootcamp by noted Selenium expert Dave Haeffner

Read: Chapter 1 | Chapter 2 | Chapter 3 | Chapter 4 | Chapter 5 | Chapter 6 | Chapter 7 | Chapter 8

Dave recently immersed himself in the open source Appium project and collaborated with leading Appium contributor Matthew Edwards to bring us this material. Appium Bootcamp is for those who are brand new to mobile test automation with Appium. No familiarity with Selenium is required, although it may be useful. This is the fifth of eight posts; two new posts will be released each week.

Now that we’ve identified some test actions in our apps, let’s put them to work by wiring them up in code.

We’ll start with the iOS app and then move onto Android. But first, we’ll need to do a quick bit of setup.

Quick Setup

Since we’re setting up our test code from scratch, we’ll need to make sure we have the necessary gems installed — and done so in a way that is repeatable (which will come in handy for other team members and for use with Continuous Integration).

In Ruby, this is easy to do with Bundler. With it you can specify a list of gems and their versions to install and update from for your project.

Install Bundler by running gem install bundler from the command-line and then create a file called Gemfile with the following contents:

# filename: Gemfile

source 'https://rubygems.org'

gem 'rspec', '~> 3.0.0'
gem 'appium_lib', '~> 4.0.0'
gem 'appium_console', '~> 1.0.1'

After creating the Gemfile run bundle install. This will make sure rspec (our testing framework), appium_lib (the Appium Ruby bindings), and appium_console (our interactive test console) are installed and ready for use in this directory.

Capabilities

In order to run our tests, we will need to specify the capabilities of our app. We can either do this in our test code, or we can leverage the appium.txt files we used for the Appium Console.

Let’s do the latter approach. But first, we’ll want to create two new folders; one for Android and another for iOS. Once they’re created, let’s place each of the appium.txt files into their respective folders.

├── Gemfile
├── Gemfile.lock
├── android
│   └── appium.txt
└── ios
    └── appium.txt

Be sure to update the app capability in your appium.txt files if you’re using a relative path.

Writing Your First Test

With our initial setup taken care of, let’s create our first test file (a.k.a. “spec” in RSpec). The test actions we identified in the previous post were focused on navigation in the app. So let’s call this spec file navigation_spec.rb and place it in the ios folder.

├── Gemfile
├── Gemfile.lock
├── android
│   └── appium.txt
└── ios
    ├── appium.txt
    └── navigation_spec.rb

Now let’s write our test to launch Appium for iOS and perform a simple navigation test.
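Here’s roughly what that test looks like; a minimal sketch assembled from the same commands we’ll use in the refactored version later in this post:

# filename: ios/navigation_spec.rb

require 'appium_lib'

describe 'Home Screen Navigation' do

  before(:each) do
    # find and load the iOS appium.txt, then start the session
    appium_txt = File.join(Dir.pwd, 'appium.txt')
    caps = Appium.load_appium_txt file: appium_txt
    Appium::Driver.new(caps).start_driver
    Appium.promote_appium_methods RSpec::Core::ExampleGroup
  end

  after(:each) do
    driver_quit
  end

  it 'First cell' do
    cell = wait { text 2 }                  # find the first cell
    cell_title = cell.name.split(',').first # grab its title
    cell.click
    wait { text_exact cell_title }          # title shows on the inner screen
  end

end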

In RSpec, describe denotes the beginning of a test file, whereas it denotes a test. So what we have is a test file with a single test in it.

In this test file, we are starting our Appium session before each test (e.g., before(:each)) and ending it after each test (e.g., after(:each)). More specifically, in before(:each), we are finding the path to the iOS appium.txt file and then loading it. After that we start the Appium session and promote the Appium commands so they will be available for use within our test. We then issue driver_quit in after(:each) to cleanly end the Appium session. This is equivalent to submitting an x command in the Appium console.

The commands in our test (it 'First cell' do) should look familiar from the last post. We’re finding the first cell, grabbing its title, clicking on the cell, and then looking to see if the title appears on the inner screen.

After saving this file, let’s change directories into the ios folder (e.g., cd ios) and run the test with rspec navigation_spec.rb (assuming your Appium server is running; if not, load up the Appium GUI and click Launch). When it’s running, you will see the iOS simulator launch, load up the test app, click the first cell, and then close.

This is a good start, but we can clean this code up a bit by leveraging some simple page objects and a central configuration.

A Page Objects Primer

Automated tests can quickly become brittle and hard to maintain. This is largely due to the fact that we are testing functionality that will constantly change. In order to combat this, we can use page objects.

Page Objects are simple objects that model the behavior of an application. So rather than writing your tests directly against your app, you can write them against these objects. This will make your test code more reusable, maintainable, and easier to fix when the app changes.

You can learn more about page objects here and here.

Refactoring Your First Test

Let’s create a new directory called pages within our ios directory and create two new files in it: home.rb and inner_screen.rb. And while we’re at it, let’s create a new folder to store our test files (called spec – which is a folder RSpec will know to look for at run time) and move our navigation_spec.rb into it.

├── Gemfile
├── Gemfile.lock
├── android
│   └── appium.txt
└── ios
    ├── appium.txt
    ├── pages
    │   ├── home.rb
    │   └── inner_screen.rb
    └── spec
        └── navigation_spec.rb

Let’s open up ios/pages/home.rb to create our first page object.


# filename: ios/pages/home.rb

module Pages
  module Home
    class << self

      def first_cell
        @found_cell = wait { text 2 }
        self
      end

      def title
        @found_cell.name.split(',').first
      end

      def click
        @found_cell.click
      end

    end
  end
end

module Kernel
  def home
    Pages::Home
  end
end

Since the Appium commands are getting promoted for use (instead of passing around a driver object), storing our page objects in a module is a cleaner approach (rather than keeping them in a class that we would need to instantiate).

To create the Home module we first wrap it in another module called Pages. This helps prevent any namespace collisions, as well as simplify the promotion of Appium methods.

In Home, we’ve created some simple static methods to mimic the behavior of the home screen (e.g., first_cell, title, click). By storing the found cell in an instance variable (e.g., @found_cell) and returning self, we will be able to chain these methods together in our test (e.g., first_cell.title). And in order to cleanly reference the page object in our test, we’ve made the home method available globally (which references this module).

Now let’s open up ios/pages/inner_screen.rb and create our second page object.

# filename: ios/pages/inner_screen.rb

module Pages
  module InnerScreen
    class << self

      def has_text(text)
        wait { text_exact text }
      end

    end
  end
end

module Kernel
  def inner_screen
    Pages::InnerScreen
  end
end

This is the same structure as our previous page object. In it, we’re performing an exact text search.

Let’s go ahead and update our test to use these page objects.

# filename: ios/spec/navigation_spec.rb

require 'appium_lib'
require_relative '../pages/home'
require_relative '../pages/inner_screen'

describe 'Home Screen Navigation' do

  before(:each) do
    appium_txt = File.join(Dir.pwd, 'appium.txt')
    caps = Appium.load_appium_txt file: appium_txt
    Appium::Driver.new(caps).start_driver
    Appium.promote_appium_methods RSpec::Core::ExampleGroup
    Appium.promote_singleton_appium_methods Pages
  end

  after(:each) do
    driver_quit
  end

  it 'First cell' do
    cell_title = home.first_cell.title
    home.first_cell.click
    inner_screen.has_text cell_title
  end

end

We first require the page objects (note the use of require_relative at the top of the file). We then promote the Appium methods to our page objects (e.g., Appium.promote_singleton_appium_methods Pages). Lastly, we update our test.

Now when we run our test from within the ios directory (e.g., cd ios then rspec), it will run just the same as it did before.

Now the test is more readable and in better shape. But there is still some refactoring to do to round things out. Let’s pull our test setup out of this test file and into a central config that we will be able to leverage for both iOS and Android.

Central Config

In RSpec, we can configure our test suite from a central location. This is typically done in a file called spec_helper.rb. Let’s create a folder called common in the root of our project and add a spec_helper.rb file to it.

├── Gemfile
├── Gemfile.lock
├── android
│   └── appium.txt
├── common
│   └── spec_helper.rb
└── ios
    ├── appium.txt
    ├── pages
    │   ├── home.rb
    │   └── inner_screen.rb
    └── spec
        └── navigation_spec.rb

Let’s open up common/spec_helper.rb, add our test setup to it, and polish it up.


# filename: common/spec_helper.rb

require 'rspec'
require 'appium_lib'

def setup_driver
  return if $driver
  caps = Appium.load_appium_txt file: File.join(Dir.pwd, 'appium.txt')
  Appium::Driver.new caps
end

def promote_methods
  Appium.promote_singleton_appium_methods Pages
  Appium.promote_appium_methods RSpec::Core::ExampleGroup
end

setup_driver
promote_methods

RSpec.configure do |config|

  config.before(:each) do
    $driver.start_driver
  end

  config.after(:each) do
    driver_quit
  end

end

After requiring our requisite libraries, we’ve created a couple of methods that get executed when the file is loaded. One is to set up (but not start) Appium and another is to promote the methods to our page objects and tests. This approach is taken to make sure that only one instance of Appium is loaded at any one time.

We then configure our test actions so they run before and after each test. In them we are starting an Appium session and then ending it.

In order to use this central config, we will need to require it (and remove the unnecessary bits) in our test.

# filename: ios/spec/navigation_spec.rb

require_relative '../pages/home'
require_relative '../pages/inner_screen'
require_relative '../../common/spec_helper'

describe 'Home Screen Navigation' do

  it 'First cell' do
    cell_title = home.first_cell.title
    home.first_cell.click
    inner_screen.has_text cell_title
  end

end

Note the order of the require_relative statements – they are important. We need to load our page objects before we can load our spec_helper, or else the test won’t run.

If we run the tests from within the ios directory with rspec, we can see everything execute just like it did before.

Now that we have iOS covered, let’s wire up an Android test and some page objects, and make sure our test code supports both devices.

Including Android

It’s worth noting that in your real-world apps you may be able to have a single set of tests and segmented page objects to help make things run seamlessly behind the scenes for both devices. And while the behavior in our Android test app is similar to our iOS test app, its design is different enough that we’ll need to create a separate test and page objects.

Let’s start by creating spec and pages folders within the android directory and then creating page objects in pages (e.g., home.rb and inner_screen.rb) and a test file in spec (e.g., navigation_spec.rb).

├── Gemfile
├── Gemfile.lock
├── android
│   ├── appium.txt
│   ├── pages
│   │   ├── home.rb
│   │   └── inner_screen.rb
│   └── spec
│       └── navigation_spec.rb
├── common
│   └── spec_helper.rb
└── ios
    ├── appium.txt
    ├── pages
    │   ├── home.rb
    │   └── inner_screen.rb
    └── spec
        └── navigation_spec.rb

Now let’s open and populate our page objects and test file.

# filename: android/pages/home.rb

module Pages
  module Home
    class << self

      def first_cell
        @found_cell = wait { text 2 }
        self
      end

      def click
        @found_cell.click
      end

    end
  end
end

module Kernel
  def home
    Pages::Home
  end
end

This page object is similar to the iOS one except there’s no title search (since we won’t be needing it).

# filename: android/pages/inner_screen.rb

module Pages
  module InnerScreen
    class << self

      def has_text(text)
        wait { find_exact text }
      end

    end
  end
end

module Kernel
  def inner_screen
    Pages::InnerScreen
  end
end

In this page object we’re performing a search for an element by text (similar to the iOS example), but using find_exact instead of text_exact because of how the app is designed (we need to perform a broader search that will search across multiple attributes, not just the text attribute).

Now let’s wire up our test.

# filename: android/spec/navigation_spec.rb

require_relative '../pages/home'
require_relative '../pages/inner_screen'
require_relative '../../common/spec_helper'

describe 'Home Screen Navigation' do

  it 'First cell' do
    home.first_cell.click
    inner_screen.has_text 'Accessibility Node Provider'
  end

end

Now if we cd into the android directory and run our test with rspec, it should launch the Android emulator, load the app, click the first cell, and then end the session. The emulator will remain open, but that’s something we’ll address in a future post.

One More Thing

If we use the console with the code that we have right now, we won’t be able to reference the page objects we’ve created — which will be a bit of a pain if we want to reference them when debugging test failures. Let’s fix that.

Let’s create a new file in our android/spec and ios/spec directories called requires.rb. We’ll move our require statements out of our test files and into these files instead.

├── Gemfile
├── Gemfile.lock
├── android
│   ├── appium.txt
│   ├── pages
│   │   ├── home.rb
│   │   └── inner_screen.rb
│   └── spec
│       ├── navigation_spec.rb
│       └── requires.rb
├── common
│   └── spec_helper.rb
└── ios
    ├── appium.txt
    ├── pages
    │   ├── home.rb
    │   └── inner_screen.rb
    └── spec
        ├── navigation_spec.rb
        └── requires.rb

Here’s what one of them should look like:

# filename: ios/spec/requires.rb

# require the ios pages
require_relative '../pages/home'
require_relative '../pages/inner_screen'

# setup rspec
require_relative '../../common/spec_helper'

Next, we’ll want to update our tests to use this file.

# filename: ios/spec/navigation_spec.rb

require_relative 'requires'

describe 'Home Screen Navigation' do

  it 'First cell' do
    cell_title = home.first_cell.title
    home.first_cell.click
    inner_screen.has_text cell_title
  end

end

# filename: android/spec/navigation_spec.rb

require_relative 'requires'

describe 'Home Screen Navigation' do

  it 'First cell' do
    home.first_cell.click
    inner_screen.has_text 'Accessibility Node Provider'
  end

end

Now that we have a central requires.rb for each device, we can tell the Appium Console to use it. To do that, we’ll need to add some additional info to our appium.txt files.


# filename: ios/appium.txt

[caps]
deviceName = "iPhone Simulator"
platformName = "ios"
app = "../../../apps/UICatalog.app.zip"

[appium_lib]
require = ["./spec/requires.rb"]

# filename: android/appium.txt

[caps]
platformName = "android"
app = "../../../apps/api.apk"
avd = "training"
deviceName = "Android"

[appium_lib]
require = ["./spec/requires.rb"]

This new require value is only used by the Appium Console. Now if we run arc from either the ios or android directories, we’ll be able to access the page objects just like in our tests.

And if we run our tests from either directory, they will still work as expected.

Outro

Now that we have our tests, page objects, and central configuration all sorted, it’s time to look at wrapping our test execution so we can easily run our tests in the cloud.

Read: Chapter 1 | Chapter 2 | Chapter 3 | Chapter 4 | Chapter 5 | Chapter 6 | Chapter 7 | Chapter 8

About Dave Haeffner: Dave is a recent Appium convert and the author of Elemental Selenium (a free, once weekly Selenium tip newsletter that is read by thousands of testing professionals) as well as The Selenium Guidebook (a step-by-step guide on how to use Selenium Successfully). He is also the creator and maintainer of ChemistryKit (an open-source Selenium framework). He has helped numerous companies successfully implement automated acceptance testing; including The Motley Fool, ManTech International, Sittercity, and Animoto. He is a founder and co-organizer of the Selenium Hangout and has spoken at numerous conferences and meetups about acceptance testing.

Follow Dave on Twitter - @tourdedave

Categories: Companies
