Feed aggregator

I’ve got integration and system tests, why do I need unit tests?

James Grenning’s Blog - Mon, 08/11/2014 - 22:12

Kent Beck told me years ago that if the code does not need to work, then there is no need to test it. He went on to observe: why bother writing it if it does not need to work? Since hearing that, and discovering how frequently I make coding mistakes, I want thorough tests.

Maybe you are asking yourself, “I’ve got integration and system tests, why do I need unit tests?” The answer is simple: simple math.

The only practical way to know that every line of code is doing what you think it should do is to have unit tests that exercise and check each line. Higher-level tests, because of the number of combinations involved, cannot practically cover each line as the system grows larger and object interactions increase. For example, how many tests would be needed to thoroughly test these three objects if you test them together?

Did you do the math? It requires multiplication. You would need a thousand tests to test this simple network of objects. In most cases, that is not practical. Who would bother to spend the time to write 1,000 tests for such a simple system? (Maybe someone in medical electronics, aviation or space travel.)

Now consider a unit test strategy for the same three objects.

The numbers game works in favor of unit testing. The test count is calculated with addition instead of multiplication: you need a total of 30 unit tests to fully test each object. Then you’d be smart to write the necessary higher-level tests to check that each interface is being used properly and that the system is meeting its requirements.
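To make the arithmetic concrete, here is a quick sketch in shell (the assumption of 10 behaviors per object is illustrative, chosen to match the 1,000 and 30 figures above):

```shell
# Hypothetical example: three collaborating objects, each with 10
# distinct behaviors to verify.
behaviors=10

# Tested together, the behavior combinations multiply:
integration_tests=$((behaviors * behaviors * behaviors))
echo "integration tests needed: $integration_tests"   # 1000

# Tested in isolation, the per-object test counts simply add:
unit_tests=$((behaviors + behaviors + behaviors))
echo "unit tests needed: $unit_tests"                 # 30
```

The gap widens fast: adding a fourth such object multiplies the combined count by ten again, while adding only ten more unit tests.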

It’s not a matter of needing one and not the other. Unit tests and higher-level tests are both needed. They just serve different purposes. Unit tests tell the programmer that the code is doing what they think it should do. They write those tests. Higher-level tests (by whatever name you like: BDD, ATDD, acceptance, system, integration, and load tests) cannot be thorough, but they demonstrate that the system is meeting its requirements.

Categories: Blogs

Announcing CircleCI Integration on Sauce Labs

Sauce Labs - Mon, 08/11/2014 - 21:15

Last week our friends at CircleCI showed us how to securely run cross-browser front-end tests on the Sauce Labs cloud using their hosted Continuous Integration service. We’ve long been advocates of good continuous integration practices and have developed a few plugins for some of the more common CI servers out there. We’re super excited to add CircleCI to our list and even more excited about how easy it is to get it going!

Continuous Integration in the Cloud

Continuous Integration, if you don’t already know, is the process of building your app frequently from a shared source code repository and running tests to make sure everything works. If your tests don’t pass and the build is not successful, the defects must have been introduced in code checked in since the last good build, so problems are much easier to find and fix quickly.

Maintaining a local CI server can be a hassle. Anyone who’s spent any considerable time configuring Jenkins jobs with all its various plugins and tasks can tell you all about it. CircleCI, on the other hand, integrates directly with GitHub and can actually *infer* necessary settings directly from your code (if you’re following good development practices for that language and environment), so many projects just magically build themselves on CircleCI without any additional configuration on your part. It’s like three clicks from zero to CI. Pretty sweet! If you do need to tweak or customize any settings, you can easily do so by describing them in a circle.yml file placed in your repo.

Running Tests on Sauce Labs Browsers

Sauce Labs is the world’s largest cross-browser client cloud. We host over 375 different configurations of operating systems and browsers so you can ensure that your app works on all the specific platforms and versions you need to support. These days that’s an ever-growing list! So it makes sense to run these tests with your continuous integration process so you know things work across the board and you don’t end up spending a bunch of time and trouble trying to hunt down bugs that were introduced much earlier in the development cycle.

Now, if your build deploys your code to a publicly accessible environment, CircleCI will simply execute your Selenium tests and you probably won’t need to configure anything, since Sauce Labs browsers will be able to connect to that environment over the public network. However, if you want CircleCI to execute your tests locally in its build containers, you’ll need to use Sauce Connect.

Sauce Connect is a secure tunneling utility which allows you to execute tests behind firewalls via an encrypted connection between Sauce Labs and your testing environment. When you run Sauce Connect yourself, you typically do it from a command line or shell script and supply your Sauce Labs account credentials as parameters. With a CircleCI build, you’ll need to set it up in the circle.yml file so it can be launched as part of the build process and those tests can run inside the local container.
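For reference, a manual launch from a shell might look something like the following sketch (the ./bin/sc path, the placeholder credentials, and the ready-file location are assumptions for illustration; this won't actually run without the Sauce Connect binary unpacked locally):

```shell
# Hypothetical manual launch of Sauce Connect. Assumes the sc binary
# has been unpacked into ./bin and credentials are set as variables.
SAUCE_USERNAME="your-username"        # placeholder
SAUCE_ACCESS_KEY="your-access-key"    # placeholder

# -u/-k supply account credentials; -f writes a "ready file" once the
# tunnel is established, which scripts can poll before starting tests.
./bin/sc -u "$SAUCE_USERNAME" -k "$SAUCE_ACCESS_KEY" -f ~/sc_ready
```

In a CircleCI build, the same invocation moves into circle.yml so it can run inside the build container, as shown below.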

All that’s really involved here is telling the build task where to find Sauce Connect and how to start it up with your account credentials.

The first part is downloading and unpacking the Sauce Connect file, which you add as a custom dependency entry in your circle.yml:

	dependencies:
	  post:
	    - wget https://saucelabs.com/downloads/sc-latest-linux.tar.gz
	    - tar -xzf sc-latest-linux.tar.gz

The second part is to add your credentials, launch the tunnel, and check that it’s running before kicking off the tests. You’ll put these lines in the `test` section of circle.yml:

	test:
	  override:
	    - ./bin/sc -u $SAUCE_USERNAME -k $SAUCE_ACCESS_KEY -f ~/sc_ready:
	        background: true
	        pwd: sc-*-linux
	    # Wait for tunnel to be ready
	    - while [ ! -e ~/sc_ready ]; do sleep 1; done
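That last line is a readiness-file handshake: the tunnel touches ~/sc_ready once it's up, and the build polls for that file before starting the tests. Here's a self-contained sketch of the same pattern, with a short sleep standing in hypothetically for the tunnel startup:

```shell
# A background process touches a "ready file" once it has finished
# starting up; the foreground loop polls for that file before proceeding.
ready_file="./sc_ready_demo"    # stand-in for ~/sc_ready
rm -f "$ready_file"

# Stand-in for "./bin/sc ... -f ~/sc_ready" running in the background:
( sleep 1; touch "$ready_file" ) &

# Wait for readiness, exactly as in the circle.yml test section:
while [ ! -e "$ready_file" ]; do sleep 1; done
echo "tunnel ready"
```

Because the sc command runs with `background: true`, the build would otherwise race ahead before the tunnel exists; the polling loop closes that gap.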

That’s all there is to it. You can find out the details here and see a full example on GitHub. And CircleCI has a nice little utility to help you add your credentials as environment variables so that they are not visible as plain text in the repo.

With CircleCI tackling all the necessary work involved in supporting your continuous integration process and Sauce Labs hosting the world’s most extensive cross-browser client cloud, you can be free of the costs and hassles of managing all these systems and grids and get back to focusing on the business of making great software.

- Michael Sage, Principal Technology Evangelist, Sauce Labs

Michael Sage is a Principal Technology Evangelist at Sauce Labs. He helps software teams develop, deliver, and care for great apps. He’s lived through two or three technology revolutions and expects a few more to come, and finds the prospect incredibly exciting. A native of Philadelphia, he’s made San Francisco his home for over 25 years, but still can’t find a good hoagie there.

Categories: Companies

User Interface Refresh

This is a guest post from Tom Fennelly

Over the last few weeks we've been trying to "refresh" the Jenkins UI, modernizing the look and feel a bit. This has been a real community effort, with collaboration from lots of people, both in terms of implementation and in terms of providing honest/critical feedback. Lots of people deserve credit but, in particular, a big thanks to Kevin Burke and Daniel Beck.

You're probably familiar with how the Jenkins UI currently looks, but for the sake of comparison I think it's worth showing a screenshot of the current/old UI alongside a screenshot of the new UI.

Current / Old Look & Feel

New Look & Feel

Among other things, you'll see:

  • A new responsive layout based on <div> elements (as opposed to <table> elements). Try resizing the screen or viewing on a smaller device. More to come on this though, we hope.
  • Updated default font from Verdana to Helvetica.
  • Nicer form elements and nicer buttons.
  • Smoother side panels e.g. Build Executors, Build Queues and Build History panes.
  • Smoother project views with more modern tabs.

You might already be seeing these changes if you're using the latest and greatest code from Jenkins. If not, you should see them in the next LTS release.

We've been trying to make these changes without breaking existing features and plugins, and so far we think we've been successful. But if you spot anything you think we might have had a negative effect on, then please log a JIRA and we'll try to address it.

One thing we've "sort of" played with too is cleaning up the Job Config page: breaking it into sections, making it easier to navigate, etc. This is a big change and something we've been shying away from because of the effect it will have on plugins and form submission. That said, I think we'll need to bite the bullet and tackle this sooner or later because it's a big usability issue.

Categories: Open Source

Part 1 – [ ________ ] is the Best Policy

Sonatype Blog - Mon, 08/11/2014 - 17:44
Open source has been around for donkey’s years, but until recently the persuasive argument of “many eyeballs” was the guiding policy when using open source. Then came the recent industry shock wave we all know as Heartbleed, and now many of us are re-evaluating the cost of free software.

To read more, visit our blog at blog.sonatype.com.
Categories: Companies

Meet the Bees: Tracy Kennedy


At CloudBees, we have a lot of seriously talented developers. They work hard behind the scenes to keep the CloudBees continuous delivery solutions (both cloud and on-premise) up-to-date with all the latest and greatest technologies, gizmos and overall stuff that makes it easy for you to develop amazing software.
In this Meet the Bees post, we buzz over to our Richmond office to catch up with Tracy Kennedy, a solutions architect at CloudBees.

Tracy has a bit of an eccentric background. In college, she studied journalism and, in 2010, interned for the investigative unit of NBC Nightly News. She won a Hearst Award for a report she did about her state’s delegates browsing Facebook and shopping during one of the last legislative sessions of the season. She had several of her stories published in newspapers around the state. Sounds like the beginnings of a great journalistic career, right?

Well, by the time she graduated, Tracy ended up being completely burned out and very cynical about the news industry. Instead of trying to get a job in journalism, she wanted to make a career change.

Tracy's dad was a programmer and he offered to pay for her to study computer science in a post-bachelor’s program at her local university. He had wanted her to study computer science when she first started college, but idealistic Tracy wanted to first save the world with her hard-hitting reporting skills. She now took him up on his offer, and surprisingly, found she had a knack for technology.

Tracy landed a job at a small web development shop in Richmond as a QA and documentation contractor. The work tickled her journalistic skills as well as her newly budding computer science skills and she had a great opportunity to be mentored by some really talented web developers and other technical folks while she was there.

By the time Tracy felt ready to look for more permanent work, she had finished some hobby projects of her own that furthered her programming skills better than any class she had taken. It was also at that time that Mike Lambert, VP of Sales - Americas at CloudBees, was looking for someone with Tracy's skills and experience.
You can follow Tracy on Twitter: @Tracy_Kennedy
Who are you? What is your role at CloudBees?
My name is Tracy Kennedy and I’m a solutions architect/sherpa at CloudBees.

My primary role is to reach out to customers on our continuous delivery cloud platform and assist them in on-boarding and learning how to use the platform to its fullest potential. However, I work on other things, too. My role actually varies wildly; it really just depends on what the current needs of the organization are.

Tracy with her dog Oliver.

I’ve dabbled in some light marketing by writing emails for and sometimes creating customer communication campaigns, done lots of QA work when debugging our automated sherpa funnel campaign and do a bit of sales engineering, as well, since I’m physically located in the Richmond sales office. I also write some of our documentation as I find the time and identify the need for it.

Lately, I’ve also been spending a good chunk of my week working on updating our Jenkins training materials for use by our CloudBees Service Partners and laying the foundation for future sherpa outreach campaigns.

When those projects are done, I plan on going back to work on a Selenium bot that will automate a lot of my weekly tasks involving the collection of customer outreach statistics. I’m hoping that bot will give me more free time to spend learning about Jenkins Enterprise by CloudBees and Jenkins Operations Center by CloudBees - our on-premise Jenkins solutions, and to create some ClickStacks for RUN@cloud.

What makes CloudBees different from other PaaS and cloud computing companies?
CloudBees has a really, really excellent "Jenkins story," as the business guys like to say, and that story is really almost like a Dr. Seuss book in its elegant simplicity. Ahem:

Not only is Tracy a poet, but she is a budding actress! Here she is as an extra in a Lifetime movie.

I can use Jenkins on DEV@cloud
I can hide Jenkins from a crowd

I can load Jenkins to on-premise machines
I can access Jenkins by many means

I can use Jenkins to group my jobs
I can use Jenkins to change templated gobs

I can use Jenkins to build mobile apps
I can use Jenkins to check code for cracks

I can keep Jenkins up when a master is down
I can “rent” slaves to Jenkins instances all around

I can use Jenkins here or there, I can use Jenkins anywhere.

Don’t worry; I have no plans on quitting my day job to become a poet laureate!

What are CloudBees customers like? What does a typical day look like for you?
CloudBees PaaS customers can range from university students to enterprise consultants. It’s also not uncommon to see old school web gurus open an account and “play around” with it in an attempt to understand this crazy new cloud/PaaS sensation.
I’ve even seen some non-computer science engineers on our platform who are just trying to learn how to program, and those are my favorite customers to interact with since they’re almost always very bright and seem to have an unparalleled respect for the art of creating web applications. It’s always a great delight to be able to “sherpa” them along on their web dev journey and to see them succeed as a result.
As for my typical day, I actually keep track of each of my days’ activities in a Google Calendar, so I can give you a pretty accurate timeline of my average day:

8:30 or 8:45 am - Roll into the Richmond office, grab some coffee. Start reading emails that I received overnight and start replying as needed. Check the engineering chat for any callouts to me and check Skype for any missed messages.

9:30 am - Either start responding to customer emails or start working on whatever the major project of the day is. If it’s something serious or due ASAP, I throw my headphones on to help me concentrate and tune out the sales calls going on around me.

12:00 pm - Lunch at my desk while I read articles on either arstechnica.com, theatlantic.com, or one of my local news sites.

1:00 pm - Usually by this point, someone will have asked me to review an email or answer a potential customer’s question, so this is when I start working on answering those requests.

Tracy after doing the CrossFit workout "Cindy XXX."



3:00 pm - Start moving forward a non-urgent project by contacting the appropriate parties or doing the relevant research.

The end of my day varies depending on the day of the week:
  • Monday/Wednesday - 4:00 pm  - Leave to go to class
  • Tuesday/Thursday - 5 pm  - Leave for the gym
  • Friday - 5:30 pm  - Leave for home

Tracy's motorcycle: a 1979 Honda CM400

In my spare time, video games are a fun escape for me and they give me a cheap way of tickling my desire to see new places. Sometimes I spend my Friday nights playing as a zombie-apocalypse survivor in DayZ and exploring a pseudo-Czech Republic with nothing but a fireman’s axe to protect me from the zombie hordes.

On the weekends I spend my time playing catch-up on chores, hanging out with my awesome and super-spoiled doggie and going on mini-adventures with my boyfriend. Richmond has a lot of really beautiful parks, and we hike through one of them each weekend if the weather’s conducive to it.

When I can get more spare time during the week, I plan on finishing restoring my motorcycle and actually riding it, renovating my home office into a gigantic closet for all of my shoes and girly things, and learning how to self-service my car.



What is your favorite form of social media and why?
Twitter -- I enjoy the simplicity of it, how well it works even when my wi-fi or cellular data connection is terrible, and how easy it makes following my favorite news outlets.
Something we all have in common these days is the constant use of technology. What’s your favorite gadget and why?
While I’d love to name some clever or obscure gadget that will blow everyone’s mind, the truth is that I’d be completely lost without my Android smartphone. I use it to manage my time via Google Calendar, check all 10 million of my email accounts with some ease and stay up to date on any breaking news events. Google Maps also keeps me from getting hopelessly lost when driving outside of my usual routes.
Favorite Game of Thrones character? Why is this character your favorite?

Sansa Stark, Game of Thrones

Please note that book-wise I’m only on “Storm of Swords” and that I’m completely caught up on the HBO show, so I’m only naming my favorite character based on what I’ve seen and read so far. Some light spoilers below:

While I know she’s not the most popular character, I really like Sansa Stark. Sure, she’s not the typical heroine who wields swords or always does the right thing, but that’s part of her appeal to me. I like to root for the underdogs, and here we have this flawed teenager who’s struggling to survive her unwitting entanglement in an incredibly dangerous political game. She has no fighting skills, no political leverage beyond her name, and no true allies, and she’s trapped in a city with and by her psychopathic ex-fiancé whose favorite pastime is to literally torture her.

The odds of Sansa surviving such a situation seem very slim, and yet despite her naïveté, she’s managing to do just that while the more conventional “heroes” of the story are dropping like flies. I could very well see her learning lessons from the fallen’s mistakes and applying them to any leadership roles she takes on in the future. Is she perhaps a future Queen of the North? I wouldn’t discount it.
Sansa is a bright girl with the right name and the right disposition to gracefully handle any misfortunes thrown her way, and aren’t grace, intelligence and a noble lineage all the right traits for a queen? I think so, but we’ll just have to see if George R.R. Martin agrees.
Categories: Companies

Open source, enterprise software increasingly synergistic

Kloctalk - Klocwork - Mon, 08/11/2014 - 15:18

As open source software continues to gain traction and adherents, many have predicted that such solutions will virtually replace enterprise software. However, as Information Age contributor Ben Rossi recently pointed out, this is not how the technologies in question are playing out. Instead, open source and enterprise software are increasingly working together synergistically to deliver superior results.

Let our powers combine
Rossi pointed out that when combined effectively, the union of open source and enterprise software has the potential to deliver the unique benefits of both approaches. Open source offers the advantage of community-wide efforts and more openness. Enterprise software, meanwhile, enables intellectual property protection, as well as greater accountability, the writer explained.

Rossi went on to explain that while there is no doubt that open source is having a transformative effect on the industry, it has not matured to the point where it can serve as a complete replacement for enterprise software. The latter is therefore still necessary to meet certain business needs.

Analytics example
To highlight this point, Rossi pointed to the extremely popular open source program Hadoop, which enables organizations to store, utilize and query large data sets without altering format. He noted that Hadoop is so popular that its name is now effectively synonymous with big data analytics.

Yet Rossi also emphasized the fact that Hadoop has a number of shortcomings. Most notably, it cannot deliver real-time insight. In order to get such speedy results, firms must combine Hadoop or other, similar open source software with enterprise Massively Parallel Processing (MPP) in-memory databases. The results that businesses can enjoy by using both of these resources together far outpace the outcome of an exclusive approach.

"A coupling with enterprise makes Hadoop a smarter, quicker, much friendlier beast, and businesses will undoubtedly have to marry the two if they want to remain agile and responsive to the demands on them," Rossi asserted.

Best practices needed
Similarly beneficial pairings between open source software and enterprise offerings are fairly commonplace. It is therefore detrimental for any firm to embrace an approach to software that relies entirely on one type of offering and wholly ignores the other.

This speaks to the overall need for a well-thought-out approach when it comes to open source solutions. These tools have the potential to radically improve a company's productivity, efficiency and effectiveness, but only when combined with best practices and steady strategy. Moving too quickly and recklessly to adopt open source software will significantly undermine the technology's utility.

A key example of such best practices is the adoption of technical support tools, such as scanning and governance solutions. These assets are essential for managing open source usage and ensuring license compliance. By incorporating automated scanning and other functions, these tools reduce the risk in open source deployments, thereby delivering the greatest possible value from the technology.

Categories: Companies

uTest to Live Tweet, Interview Speakers This Week From CAST 2014 in NYC

uTest - Mon, 08/11/2014 - 15:15

As a proud sponsor of the Association for Software Testing’s 9th Annual conference this week, CAST 2014, uTest will be in New York City through Wednesday covering all of the happenings and keynotes from this major (and now sold-out) testing event.

Beginning Tuesday here on the Blog, uTest will be providing daily video interviews with speakers from some of the conference’s sessions and keynotes as they leave the stage. uTest will also be live-tweeting @uTest on Twitter, using the official event hashtag #CAST2014 throughout the conference’s full days on Tuesday and Wednesday.

This year’s theme is ‘The Art and Science of Testing,’ so conference speakers will share their stories and experiences surrounding software testing, whether bound by rules and laws of science and experimentation, or expressed through creativity, imagination, and artistry. Some of these esteemed speakers include:

  • James Bach
  • Michael Bolton
  • Fiona Charles
  • Anne-Marie Charrett
  • Matthew Heusser
  • Paul Holland
  • Henrik Andersson
  • Benjamin Yaroch

In addition to the live Tweets and video blogging, uTest will be providing a full recap later on in the week highlighting some of the best discussions, topics and happenings from the show.

Be sure to follow all of the coverage from CAST 2014 on the uTest Blog, and on Twitter @uTest and #CAST2014. And if you’re at the show this week, let us know in the comments below, or reply to us while we’re tweeting at one of the sessions or keynotes!

Categories: Companies

New HPC project aims to analyze climate change’s impact on water resources

Kloctalk - Klocwork - Sun, 08/10/2014 - 19:01

As organizations, both private and public, continue to expand the breadth and depth of their research, the utility of high performance computing solutions is growing. These resources are essential in countless areas, especially scientific disciplines.

The most recent example of this trend can be seen at the International Center for Biosaline Agriculture's Modeling and Monitoring Agriculture and Water Resources Development project in Dubai. As Engineering.com reported, the ICBA deployed a new HPC system to better support efforts to analyze the effects of climate change on water resources in the Middle East and North Africa.

A unified effort
The MAWRED project, according to the news source, is a joint effort funded by the U.S. Agency for International Development. To execute this project, the ICBA is working with NASA's Goddard Space Flight Center, as well as a number of experts from American universities. These contributors will downscale climate data to regional and local scales before it is used as input for the NASA GSFC Land Information System models, which run at the ICBA.

All of this will help the project leaders ascertain the possible impact that climate change will have on water resources and agriculture in the MENA region. With this information in hand, local government agencies and other public organizations will be equipped to make better decisions concerning their water conservation and agricultural plans going forward.

"The International Centre for Biosaline Agriculture undertakes incredibly important work to improve water management in the MENA region and help the most vulnerable in society. We are delighted to have been selected to implement an HPC solution as well as support their in-house team with security solutions and services," said Dave Brooke, general manager for Dell Middle East, the news source reported.

Technology is key
The news source noted that the ICBA is a world-class research organization, consisting of teams of leading scientific experts. However, as Dr. Rachael McDonnell, a water policy and governance scientist with the ICBA, pointed out, technology – and specifically HPC – is key to the organization's efforts.

"Technology is absolutely essential to our ability to deliver information to governments and public bodies which potentially leads to life-changing results," said McDonnell, Engineering.com reported. "The implementation of Dell's HPC solution is key to our ability to analyze vast amounts of data which can be used to improve the lives of people in the MENA region."

HPC and the climate
This is not the first time that HPC solutions have been applied to the increasingly important issue of climate change. For example, the National Center for Atmospheric Research uses these tools to analyze massive amounts of weather-related data to identify patterns that would otherwise remain invisible.

Speaking to SiliconANGLE's the CUBE at the IBM Edge conference held in Las Vegas in May, Pamela Gillman, manager of data analysis services groups for the NCAR, noted that her team leverages HPC to examine the climate during the Paleolithic era, as a means of exploring climate change over long stretches of time and applying this insight to today. Additionally, the group examines data for purposes of better identifying upcoming hurricanes.

Gillman explained that HPC resources are essential for gaining value from the tremendous amounts of climate data gathered. Prior to the implementation of HPC solutions, the NCAR struggled to effectively wield all of this raw information.

As more organizations increase their focus on climate science and HPC becomes more sophisticated and powerful, it’s incumbent on development teams to choose tools that simplify the process of making these apps stable and efficient. Choosing a dynamic source code debugger that handles multiple processes and threads, for example, helps developers understand what’s going on within the app so they can better deliver analysis results that are accurate and reliable.

Categories: Companies

Community Update 2014-06-04 ASP.NET vNext, @CanadianJames MVC Bootstrap series and what we learned from C++

Decaying Code - Maxime Rouiller - Sat, 08/09/2014 - 05:00

So the big news is that Visual Studio 14 actually reached CTP. Of course, this is not the final name and is very temporary.

If you want to install it, I suggest booting up a VM locally or on Windows Azure.

Enjoy!

Visual Studio “14”

Visual Studio "14" CTP Downloads (www.visualstudio.com)

Announcing web features in Visual Studio “14” CTP (blogs.msdn.com)

Visual Studio "14" CTP (blogs.msdn.com)

ASP.NET vNext in Visual Studio “14” CTP (blogs.msdn.com)

Morten Anderson - ASP.NET vNext is now in Visual Studio (www.mortenanderson.net)

James Chambers MVC/Bootstrap Series

Day 4: Making a Page Worth a Visit | They Call Me Mister James (jameschambers.com)

Web Development

To Node.js Or Not To Node.js | Haney Codes .NET (www.haneycodes.net)

ASP.NET

aburakab/ASP-MVC-Tooltip-Validation · GitHub (github.com) – Translate MVC errors to Bootstrap notification

Download Microsoft Anti-Cross Site Scripting Library V4.3 from Official Microsoft Download Center (www.microsoft.com)

ASP.NET Web API parameter binding part 1 - Understanding binding from URI (www.strathweb.com)

Cutting Edge - External Authentication with ASP.NET Identity (msdn.microsoft.com)

Forcing WebApi controllers to output JSON (blog.bjerner.dk)

Videos

What – if anything – have we learned from C++? (channel9.msdn.com)

Search Engine

Elasticsearch.org Elasticsearch 1.2.1 Released | Blog | Elasticsearch (www.elasticsearch.org)

Elasticsearch.org Marvel 1.2 Released | Blog | Elasticsearch (www.elasticsearch.org)

Dealing with human language (www.elasticsearch.org)

Categories: Blogs

Recap: Fearless Browser Test Automation [WEBINAR]

Sauce Labs - Sat, 08/09/2014 - 01:06

Thanks to those of you who attended our last webinar, Fearless Browser Test Automation, featuring John-David Dalton. This webinar was presented by O’Reilly and Sauce Labs, a provider of the world’s largest automation cloud for testing web and native/hybrid mobile applications.

We hope you found John-David’s perspectives helpful, and that if you’re now doing manual testing on a limited range of browsers – or no testing at all – you’re ready for the awesomeness of automated cross-browser testing.

Missed the webinar? You can watch it in its entirety HERE.

Still scared? Never fear: you can get more tips and tools at the Sauce Labs Documentation Center. If you’re new to JavaScript testing, here are some resources to get you started.

Lastly, please follow our friends at O’Reilly at @oreillymedia, Sauce Labs at @saucelabs, and John-David at @jdalton to keep up with the latest, and feel free to share this webinar using the hashtag #fearlesstesting.

Categories: Companies

Make it easier to do the right thing

IBM UrbanCode - Release And Deploy - Fri, 08/08/2014 - 22:01

Alexandra Spillane and Matt Callanan from Wotif shared their journey from slower, infrequent releases to more of a continuous delivery approach at DevOps Days Brisbane 2014.

They hit on a number of the topics that I think are central to DevOps and to process change in general. They observe that people generally want to do the right thing (characteristic 4 of people in Alistair Cockburn’s wonderful article) and that people generally do what’s easiest. So there’s your fundamental dilemma in process and tools: how do you make doing the right thing the easiest choice?

DevOpsDays Brisbane 2014 – Alexandra Spillane and Matt Callanan – Making Easy = Right from devopsdays on Vimeo.

Anyway, this presentation hits a topic that is near and dear to my heart, and I hope you check it out. After all, UrbanCode only got into product development when a developer/cofounder shipped a build to a customer that had code in it that wasn’t in source control. Another developer was promptly assigned the task of making it easier to create shippable builds from source control than on your desktop. The Anthill build server was born. We even used “Making it easy to do the right thing” as a marketing tag-line for a while. So there you go: if you want people to follow a process, make it easier to follow the process than not. If you can also make it rewarding, you’ll have something addictive.

Categories: Companies

The Role of a Test Architect

I get some really good questions by e-mail and this is one of them.

Q: What does a QA Architect do in a team, and what skills are needed for this job?

A: To me (although different organizations will have different definitions), the test architect is a person who understands testing at a very in-depth level and is responsible for things such as:
  • Designing test frameworks for test automation and testing in general
  • Directing and coordinating the implementation of test automation and other test tools
  • Designing test environments
  • Providing guidance on the selection of the most effective test design techniques in a given situation
  • Providing guidance on technical types of testing such as performance testing and security testing
  • Designing methods for the creation of test data
  • Coordinating testing with release processes
This is a "big picture" person who also understands software development and can work with developers to ensure that the test approaches align with development approaches. So, the test architect should be able to understand various project life cycles. These days, a test architect needs to understand the cloud, SOA and mobile computing.

The term "architect" in many organizations implies a very deep level of understanding, and it is a highly respected position. The expectations are pretty high. The architect can provide guidance on things that the test manager may not have technical knowledge about. Yet the test architect can focus on testing and not have to deal with the administrative things that managers deal with.

Follow-up question: "What kind of company is more likely to have such a role? A large company or a smaller company? When you say 'directing and coordinating,' that sounds like communicating across teams, like QA, dev, DevOps, and DBA, to get things done."

A: I would say that you would likely find the role in companies that truly understand the role and value of testing. For example, they know that QA does not equal testing. It would be an interesting project to research how many companies in a sample group would have test architect as a defined role. I would tend to think of larger companies being more likely to have a test architect, but I've seen smaller software companies with test architects and larger companies with no one in any type of test leadership or designer role.

Another indication might be those companies that have a more centralized testing function, such as a testing center of excellence (COE). I have some misgivings about the COE approach in that COEs often fail because people see them as little bureaucracies instead of support. Also, they tend to try to "take over the world" in a company in terms of test practice. The lifespan of a testing COE is often less than two years from what I have seen. It's good money for testing consultants to come in and establish the COE, but then they leave (or get asked to leave) and the energy/interest goes away.

And...the company would need to see the need for both functional and technical testing. You need a test architect to put those pieces together.

This is not "command and control" but rather design and facilitation. And you are right, the test architect role touches many other areas, tasks and people.

Follow-up question: "What kind of tools? I'm assuming you're talking about more than handy shell scripts. Simulators? REST clients like Postman customizations?"

A: Right, the tools are typically high-end commercial tools, but there is a trend toward open source tools and frameworks. One of my friends calls the big commercial tool approach "Big Pharma". The key is that the test architect knows how to define a framework where the tools can work together. This can include customized homegrown tools as well. Those can be handy.

By the way, the term "framework" can have many meanings. The way I'm using the term here is a structure for test design and test automation that allows functional testers to focus on understanding the application under test and design (and implement) good tests, with the technical testers building and maintaining the technical side of automation.

We also have to expand the view to include not only test automation, but other support tools as well, such as incident management, configuration management and others. For example, there is a great opportunity to use tools to greatly reduce the time needed for test design. Tools such as ACTS (from nist.gov) and Hexawise can be helpful for combinatorial testing, test data generators are needed for test automation and performance testing, and I'm especially keen on model-based testing when the model is used to generate tests (not the manual approach). I think BenderRBT does a great job in designing tests from requirements. Grid Tools has recently introduced a tool called Agile Designer to design tests based on workflow. I'll have more information on that in an upcoming post.
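The payoff of combinatorial test design is easy to see with a small sketch. The following is a naive illustration of the pairwise idea behind tools like ACTS and Hexawise; the parameter names and values are invented for illustration, and real tools use far more sophisticated algorithms:

```python
# A rough sketch of the combinatorial (pairwise) testing idea: instead of
# running every combination of parameter values, generate a much smaller
# suite that still covers every PAIR of values across parameters.
from itertools import combinations, product

# Hypothetical test parameters for a web application.
params = {
    "browser": ["chrome", "firefox", "safari"],
    "os": ["windows", "linux"],
    "locale": ["en", "de"],
}
names = list(params)

# Every value pair drawn from two different parameters must be exercised.
required = set()
for n1, n2 in combinations(names, 2):
    for v1 in params[n1]:
        for v2 in params[n2]:
            required.add(frozenset([(n1, v1), (n2, v2)]))

def pairs_of(combo):
    """All parameter-value pairs exercised by one concrete test."""
    return {frozenset(p) for p in combinations(list(zip(names, combo)), 2)}

# Greedy: repeatedly pick the test covering the most still-uncovered pairs.
all_combos = list(product(*params.values()))
tests, uncovered = [], set(required)
while uncovered:
    best = max(all_combos, key=lambda c: len(pairs_of(c) & uncovered))
    tests.append(best)
    uncovered -= pairs_of(best)

print(len(all_combos))  # 12 exhaustive combinations
print(len(tests))       # 6 tests still cover every value pair
```

Exhaustive testing of these three parameters takes 3 × 2 × 2 = 12 tests, but the greedy pairwise suite covers every two-way value combination in half that; with realistic parameter counts the gap grows dramatically, which is exactly the economy these tools sell.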

What Does it Take to Become a Test Architect?

I suppose one could take many paths. However, I would not automatically assume that someone in an SDET (Software Developer in Test) role would be qualified. That's because more than just the technical perspective is needed. The test architect also needs to understand the business processes used in the organization. Personally, I think people at this level, or who aspire to it, would profit by studying Enterprise Architecture. My favorite is the Zachman Enterprise Architecture Framework.

I would look for:

1. Knowledge and understanding at a deep level - Not the superficial knowledge that most people never get past. This means that they know about metrics, code complexity, reliability modeling, performance modeling, security vulnerabilities - all at an advanced to expert level. This is why I encourage people who are on the certification path to go beyond foundation and on to advanced and expert levels of training. This also includes being a continuous learner and reading good books in the testing field and related disciplines. I would start with Boris Beizer's "Software Testing Techniques, 2nd Ed."

2. Meaningful experience - At the risk of just putting an arbitrary number of years as the baseline, I would think at least eight to ten years of solid test design, tool application and perhaps software design and development would be needed. You need a decent portfolio of work to show you have what it takes to work in the role.

3. Great interpersonal skills - The test architect has to negotiate at times and exert influence to get their ideas across. They have to get along with others to function as part of the larger organizational team. Of course, this also includes developers, test managers and development architects. Just because you are a guru doesn't mean you have to be a stubborn and contentious jerk.

4. Objectivity - When choosing between alternative approaches and tools, objectivity is needed.

5. Problem Solving - This requires a creative approach to solving problems and seeing challenges from different angles. It's not uncommon to have to devise solutions where no one has gone before.

I hope this helps raise the awareness of this important role.

Questions or comments? I'm happy to address them!

Randy



Categories: Blogs

Mobile Load: Performance Testing for Mobile Applications

HP LoadRunner and Performance Center Blog - Fri, 08/08/2014 - 19:15

Mobile applications are everywhere. They have infiltrated every aspect of our lives. They are much more than simply a trend: they empower us to do more and be more.

But these applications pose a challenge for developers. There are multiple devices and platforms that all have different performance requirements.

Keep reading to find out what developers can do to make sure the performance of their applications is satisfactory on multiple devices.

Categories: Companies

Have You Hugged a Tester Today?

uTest - Fri, 08/08/2014 - 17:08

Members of uTest Community Management give uTest/Applause QA Manager Bryan Raffetto a long overdue embrace.

So I hear a lot about hugging developers. ‘Have you hugged a developer today?’

In a recent video from the good folks at SmartBear, in fact, software testing consultant Dawn Haynes said, “Why don’t you buy a developer a doughnut? You know, make friends and give people positive feedback as well, not just only the negative.”

And I don’t have anything against this. In fact, developers are lovely people who have to put up with a lot themselves. My only gripe is that the testers aren’t usually the ones getting these bountiful gifts of doughnuts and hugs.

Until today. The Community Management Team at uTest decided it was about time that a tester got some hugs, so we trekked from the 5th floor penthouse at the Applause/uTest HQ down to the 4th floor and rectified this immediately, embracing our in-house Applause and uTest QA Manager Bryan Raffetto. Needless to say, love was in the air. It’s about time someone hugged a tester. If anyone knows the hardships a tester must endure and can empathize, it’s the CM team.

So have YOU been hugged today? If not, be sure to hug a fellow testing colleague. Maybe they’ll return the favor.

If you’re also feeling adventurous in spreading the tester love, feel free to tag a picture on Twitter with #hugatester. Maybe you’ll end up on the uTest Blog!

Categories: Companies

HP Webinar: ABCs for Performance Testing Series I

HP LoadRunner and Performance Center Blog - Fri, 08/08/2014 - 17:00

Performance testing is not just about creating scripts and executing the tests. Once you understand this fact, you begin to look deeper into the rationale behind how to design, execute and understand the results.

Keep reading to find out how you can join us for a series of webinars and learn how to gain this understanding.

Categories: Companies

Heartbleed may ultimately prove beneficial for open source security

Kloctalk - Klocwork - Fri, 08/08/2014 - 15:00

In the months since the Heartbleed vulnerability was discovered, a great deal of panic has gripped the open source community. This is understandable, considering the fact that Heartbleed was a major flaw in OpenSSL, one of the most popular open source software solutions in the world, and put a huge amount of information at risk of loss or exposure. Yet despite this panic, many claimed that Heartbleed would not end up having much of an impact on the open source sector, instead arguing that this was a one-time, fluke incident.

Going even further, International Business Times contributor Joram Borenstein recently asserted that the discovery and response to Heartbleed will actually prove to be beneficial for the cause of open source security.

A new focus
The greatest significance of Heartbleed, according to Borenstein, will most likely ultimately be the way it changed both IT security professionals' and the general public's understanding of computing and its risks.

"Even now, as the second Heartbleed-related vulnerability was discovered in early June, the initial incident still remains the focus of specific sectors like tech and information security and their respective energies, discussions and concerns about the future of computing infrastructure, mobile applications and personal data protection," Borenstein wrote.

Heartbleed benefits
Perhaps even more importantly, the discovery of Heartbleed has convinced many organizations that they need to invest their efforts and money into open source security efforts in order to guarantee their assets remain safe into the future.

Specifically, Borenstein pointed to the recently announced creation of the Core Infrastructure Initiative, an organization that aims to fund critical open source projects. The writer argued that, if not for Heartbleed, the CII would likely have received virtually no attention and no additional funding to support its mission. Yet in the wake of Heartbleed, many major companies committed to donating to this cause. These firms include Amazon, Adobe, Google, Microsoft, IBM, Facebook and others.

With few exceptions, many of these companies undoubtedly rely on open source tools throughout their organizations, making open source security an issue of vital importance. But until Heartbleed occurred, these companies and many others failed to appreciate the need for a greater level of investment and more attention dedicated to security concerns throughout the open source community.

"This level of focus and interest is a good thing for our collective security and for the broader integrity of the computing landscape upon which we rely so heavily," Borenstein asserted.

A critical eye
This newfound appreciation of the importance of open source software and, consequently, the security of these solutions is leading many organizations to re-evaluate their approach to this technology.

Writing for TechTarget, industry expert Michael Cobb argued that the single most significant lesson that enterprises should take away from the Heartbleed incident is that they need to cast a more critical eye on their own open source practices. The only reason Heartbleed became such a serious issue in the first place is that countless decision-makers simply trusted that the software they used was secure, without doing their own verification. Everyone assumed that someone else had done this work, and yet no one actually had.

According to Cobb, companies must establish security teams that test code or components to ensure that they are secure, rather than relying on generally accepted standards. By developing a mature community with definitive policies in place, organizations can utilize open source solutions without risking their sensitive assets.

Categories: Companies

Spotlighting Important Data in TestTrack List Windows

The Seapine View - Thu, 08/07/2014 - 23:27

We know your team has a lot of data in TestTrack and sometimes (oftentimes?) it’s hard to wade through all of that text-based information to find what’s really important in the moment. The new Field Value Styles in TestTrack 2014.1 allow you to use colors, icons and different fonts to differentiate information and work with large amounts of data faster.

There are two components to using the field value styles. First you create a style and then you apply that style to certain types of data.

Create a Field Value Style

To get started, go to Tools > Administration > Field Value Styles. If the menu option is grayed out, check Administration permissions within your security group. Click Add to create a new style. In this example, I’ve created a style called Passed that does 3 things.

  1. Changes the text color to green
  2. Bolds the text
  3. Places an icon before the text

The icon I used here is installed with TestTrack in the workflowicons folder, which is in the client installation directory. You can also use your own icons as long as they’re 16×16 pixels.


Apply a Field Value Style

Now that you have a Field Style, you can start applying it to different fields. For this example, I’m going to apply my Passed style to the workflow status of test runs. To do that, go to Tools > Administration > Workflow and select Test Runs from the drop-down menu. Then edit the Passed state and set the Style drop-down to the Passed style. You can set the style for any workflow state on any item type, as well as any general or custom drop-down field. For general and custom fields, go to Tools > Configure List Values > Field name to assign a style to a value.


Here are the new field styles in action on the Test Runs list window.



Categories: Companies

Web Performance QA Tester and Load Tester 6.3 Released

Web Performance Center Reports - Thu, 08/07/2014 - 23:06
If you were wondering why there’s a 6.3 release only a few weeks after the 6.2 release, it’s because we’re on a new development schedule. Instead of holding back new features for months and only putting out new releases a couple of times a year, we’re moving to releases every 1-2 months, getting the new stuff and bug fixes into your hands as quickly as possible. This fits in nicely with the new monthly subscription model for Web Performance QA Tester™, where the small monthly fee covers not just support but new features month after month. If you … Continue reading »
Categories: Companies
