
Feed aggregator

The value proposition of Hypermedia

Jimmy Bogard - 1 hour 17 min ago

REST is a well-defined architectural style, and despite many misuses of the term towards general Web APIs, can be a very powerful tool. One of the constraints of a REST architecture is HATEOAS, which describes the use of Hypermedia as a means of navigating resources and manipulating state.

It’s not a particularly difficult concept to understand, but it’s quite a bit more difficult to choose and implement a hypermedia strategy. The obvious example of hypermedia is HTML, but even it has its limitations.

But first, when are REST and, in particular, hypermedia important?

For the vast majority of Web APIs, hypermedia is not only inappropriate but complete overkill. Hypermedia, as part of a self-descriptive message, includes descriptions of:

  • Who I am
  • What you can do with me
  • How you can manipulate my state
  • What resources are related to me
  • How those resources are related to me
  • How to get to resources related to me

In a typical web application, the client (HTML + JavaScript + CSS) is developed and deployed at the same time as the server (HTTP endpoints). Because of this acceptable coupling, the client can “know” all the ways to navigate relationships, manipulate state and so on. There’s no downside to this coupling, since the entire app is built and deployed together, and the same application that serves the HTTP endpoints also serves up the client.


For clients whose logic and behavior are served by the same endpoint as the original server, there’s little to no value in hypermedia. In fact, it adds a lot of work, both in the server API, where your messages now need to be self-descriptive, and in the client, where you need to build behavior around interpreting self-descriptive messages.

Disjointed client/server deployments

Where hypermedia really shines is in cases where clients and servers are developed and deployed separately. If client releases aren’t in line with server releases, we need to decouple our communication. One option is simply to build a well-defined protocol and never break it.

That works well in cases where you can define your API very well and commit to not breaking future clients. This is the approach the Azure Web API takes. It also works well when your API is not meant to be consumed through direct human interaction; machines are rather lousy at understanding and following links, relations and so on. Search crawlers can follow links well, but when it comes to manipulating state through forms, they don’t work so well (or work too well, and we build CAPTCHAs).

No, hypermedia shines in cases where the API is built for immediate human interaction, and clients are built and served completely decoupled from the server. A couple of cases could be:


Deployment to an app store can take days to weeks, and even then you’re not guaranteed to have all your clients at the same app version:


Or perhaps it’s the actual API server that’s deployed to your customers, and you consume their APIs at different versions:


These are the cases where hypermedia shines. To take advantage of it, though, you need to build generic components in the client app to interpret self-describing messages. Consider Collection+JSON:

{ "collection" :
  {
    "version" : "1.0",
    "href" : "http://example.org/friends/",
    
    "links" : [
      {"rel" : "feed", "href" : "http://example.org/friends/rss"},
      {"rel" : "queries", "href" : "http://example.org/friends/?queries"},
      {"rel" : "template", "href" : "http://example.org/friends/?template"}
    ],
    
    "items" : [
      {
        "href" : "http://example.org/friends/jdoe",
        "data" : [
          {"name" : "full-name", "value" : "J. Doe", "prompt" : "Full Name"},
          {"name" : "email", "value" : "jdoe@example.org", "prompt" : "Email"}
        ],
        "links" : [
          {"rel" : "blog", "href" : "http://examples.org/blogs/jdoe", "prompt" : "Blog"},
          {"rel" : "avatar", "href" : "http://examples.org/images/jdoe", "prompt" : "Avatar", "render" : "image"}
        ]
      }
    ]
  } 
}

Interpreting this, I can build a list of links for this item, and build the text output and labels. Want to change the label shown to the end user? Just change the “prompt” value, and your text label is changed. Want to support internationalization? Easy, just handle this on the server side. Want to provide additional links? Just add new links in the “links” array, and your client can automatically build them out.
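The interpretation described above can be sketched as a small generic rendering component. The document structure is the Collection+JSON example shown earlier; the rendering logic itself is purely illustrative, not a prescribed client design.

```python
def render_item(item):
    """Build display lines for one collection item from its self-describing data."""
    lines = []
    # Labels come from the server-supplied "prompt", not from client code,
    # so changing a label (or localizing it) needs no client release.
    for field in item.get("data", []):
        lines.append(f"{field['prompt']}: {field['value']}")
    # Links are discovered at runtime; a new entry in the "links" array
    # simply shows up without any client change.
    for link in item.get("links", []):
        lines.append(f"[{link.get('prompt', link['rel'])}] -> {link['href']}")
    return lines

# A trimmed copy of the Collection+JSON document from the article.
doc = {
    "collection": {
        "version": "1.0",
        "href": "http://example.org/friends/",
        "items": [
            {
                "href": "http://example.org/friends/jdoe",
                "data": [
                    {"name": "full-name", "value": "J. Doe", "prompt": "Full Name"},
                    {"name": "email", "value": "jdoe@example.org", "prompt": "Email"},
                ],
                "links": [
                    {"rel": "blog", "href": "http://examples.org/blogs/jdoe", "prompt": "Blog"},
                ],
            }
        ],
    }
}

for item in doc["collection"]["items"]:
    print("\n".join(render_item(item)))
```

Because the component only interprets the message, the server stays free to rename prompts, localize text, or add links without breaking it.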

In one recent application, we built a client API that automatically followed first-level item collection links and displayed the results as a “master-detail” view. A newer version of the API that added a new child collection didn’t require any change to the client – the new table automatically showed up because we made the generic client controls hypermedia-aware.

This did require an investment in our clients, but it was a small price to pay to allow clients to react to the server API, instead of having their implementation coupled to an understanding of the API that could be out-of-date, or just wrong.

The rich hypermedia formats are quite numerous now.

The real challenge is building clients that can interpret these formats. In my experience, we don’t really need a generic solution for interaction, but rather individual components (links, forms, etc.). The client still needs to have some “understanding” of the server, but this can take the form of metadata rather than a hard-coded understanding of raw JSON.

Ultimately, hypermedia matters, but in far fewer places than are today incorrectly labeled with a “RESTful API”. Nor is it entirely vaporware or astronaut architecture. The truth is somewhere in the middle, and like many nascent architectures (SOA, microservices, reactive), it will take a few iterations to nail down the appropriate scenarios, patterns and practices.


Categories: Blogs

TDD with AngularJS and TypeScript

Testing TV - 6 hours 32 min ago
Writing clean, testable JavaScript can be a daunting task for front-end developers. Many find it difficult to get into and thus discard it. Test automation is an essential part of modern web applications, especially when it comes to maintainability. In this talk, I show how easy and straightforward testable JavaScript code can be written using […]
Categories: Blogs

Visit Ranorex at Software Test Professionals Conference 2014

Ranorex - 7 hours 30 min ago
Ranorex will participate in the Software Test Professionals Conference in Denver, Colorado, from November 3rd to 6th, 2014.

The Software Test Professionals Conference is the leading event where test leadership, management and strategy converge.
The hottest topics in the industry are covered including agile testing, performance testing, test automation, mobile application testing, and test team leadership and management. 

Attending this conference will help you meet your professional career goals and give you the opportunity to
  • improve your software testing techniques
  • find the latest tools
  • discover emerging trends
  • develop new or improve existing processes
  • network and gather with other high-level professionals
  • and gain industry insight you won’t find anywhere else.
Don't miss the session "How Manual Testers Can Break into Test Automation" presented by our own Jim Trentadue.

Categories: Companies

Testing Search

Yet another bloody blog - Mark Crowther - Tue, 09/23/2014 - 08:52
I got thinking about Search testing the other day and ended up waking at 4am to scribble a mind-map of thoughts before I lost them. As you do.

The main thought was how Search testing traverses the three main layers of testing we typically consider.

* Where - the web front end in which the user builds their search queries
* How - the Search 'engine' that does the work of polling the available data
* What - the data that is being searched


Target of the search
Before any testing commences, we need to understand as fully as possible what it is that can be searched for. To do that we need to be clear on the data that is the target of the search. Though it is reasonable to expect the searched-for data to sit statically in a database, the returned results might also be generated dynamically by the search query. Not all data returned may be under the control of the business; remember the search may also be fed by external data.

- What data is the user searching for? (e.g. products, account records, flights, ...)
- What data sets are available? (e.g. product attributes, transactions/payment types, current/future flights, ...)
- What is the source of the data? (e.g. static data, dynamically created data, external sources, ...)


The Search Engine
The most informative way of learning this is via the API documentation for the search engine. With this we'll know what can be passed to the engine, and so the scope and structure of allowable queries. Some good public examples of search APIs are those for Google (https://developers.google.com/custom-search/docs/overview) and Twitter (https://dev.twitter.com/rest/public/search).

If your development team can't provide the API docs, ask for access to the Javadocs, unit tests or whatever else can inform you about the implementation specifics. In my experience, the design of search is rarely detailed sufficiently for testers in specification documents or requirements statements. Indeed, in a more agile setting it's very likely this detail is closer to the code, so go to the source! You can't properly test the implementation from the original specification alone.

To counter the obvious challenge here of 'you don't need to worry about that... just run your tests... confirm the requirements have been met...': as part of the implementation team, you are not restricted to looking at just the acceptance tests or requirements. In the tester role you work as an equal with every other role, so the code, unit tests, etc. are not 'off limits', just as your tests and automation code are not off limits to anyone else. If you are told they are, then you are not in an agile team, or whoever is saying it needs to get off the agile team. (Whether you can understand the code, tests, etc. is another matter.)


Building Queries
Our next concern, closest to the user, is the UI and how it allows search queries to be crafted.

- How can a search query be entered? (free text, drop-down, ...)
- How can search options be used? (Boolean, switches, ...)

Now that we have knowledge of what the search functionality actually is, as a component of the system, let's think about what testing will be needed.

Functional
We clearly have our typical functional testing, which will include submitting search queries. However, we need to break that down too, to ensure we're clear about what we're actually testing.

As we have a UI, we'll need to test the functionality a user is given to build a search query. This might be a simple free-text field like Google's, where the user just enters whatever text they want with no switches, drop-downs or options. Be aware this can have hidden nuances too. At first glance Google's search functionality is just a text field, but in fact we have a bunch of ways to structure our search query.

For example, you can enter 'define: testing' to get a dictionary definition, or search for files in a given directory; try '-inurl:htm -inurl:html intitle:"index of" + ("/secret")' to see how not to hide your password files and pictures of your ex-girlfriend. Don't run that search at work, by the way! If you've reviewed the API docs or something similar, you should know whether these types of searches are available to you.

For searches constructed by selecting from drop-down boxes, using radio buttons, etc., it'll be more apparent what choices you have. Again, be careful to understand where the data in those drop-downs is coming from. As always, view source. Is that drop-down populated via an Ajax call, a fixed list in the HTML, or a list from another JavaScript file? How those search options can be chosen will affect the specific test cases that are possible. Remember to do some equivalence partitioning where lists are concerned; as with other tests, it's highly unlikely you'll need to test all combinations.

Obvious initial tests will be data entry, using valid and invalid inputs, leading spaces, special characters and all the other standard cheat-sheet heuristics. However, we need to be careful here, as this is more likely form-field input validation, which is not search testing. Be sure again to view source and see where that validation is taking place, client or server side. Hopefully it's not embedded JavaScript; check the source and, if you see a validation-sounding script name in a src attribute, save a local copy and inspect it. Oh wait, you don't need to do that, because other members of your implementation team have shared these items via your CVS / Git / etc. and you can review them there.

Accuracy
We've all experienced using a search engine and getting a result that is nothing like what we were after. The underlying challenge here is a code and engineering problem, but it's part of our job to show how inaccurate the results are. The definition of 'inaccurate' will likely come from a combination of the requirements and our experience / gut feel. When we use search engines ourselves, we'll often get results that are technically correct and yet wrong. It'll be a bunch of blog posts on a given topic where half are barely relevant, or a search for 'coco' that uncovers a family-(un)friendly set of pictures of a lady looking rather 'distorted'.

Consistency
When we perform a search it's reasonable to expect the results will be the same if the underlying data and search engine logic are the same. This will form the basis for some of the regression testing we'll want to conduct over successive releases.
However, there are occasions when the same search string will bring back different results.

* The search database is replicated and data varies between the databases
* Data has changed since we last ran the query

Such issues surface as problems with search result consistency: either the results will be consistently different from those returned in a previous test run, or they will only sometimes be different. Where data is consistently different, we just need to validate this is as expected, then update our script. For results that should be the same but only sometimes match, we need to look at where the query is going. A common cause of search inconsistency is the combination of data replication across multiple databases and routing due to load balancing: when we conduct our search, we might not be certain which data set, or which server, our query is hitting. These are questions to take up with whoever is fulfilling the DevOps / infrastructure role in the team. We need to understand the data replication process: are all servers copied to at the same time or in some kind of order? Is there a back-up process that takes servers offline while we might be testing?
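The decision logic described above can be sketched in a few lines: compare repeated result sets for one query against a recorded baseline, and classify what you see. The `run` inputs here stand in for whatever actually executes the search; the classification labels are mine.

```python
def classify_consistency(baseline, runs):
    """Classify repeated result sets for one query against a recorded baseline."""
    matches = [set(run) == set(baseline) for run in runs]
    if all(matches):
        return "consistent"              # regression check passes
    if not any(matches):
        return "consistently different"  # data likely changed: verify, then update the script
    return "intermittent"                # suspect replication lag or load-balanced routing

baseline = ["doc-1", "doc-2", "doc-3"]

# Same results every run: nothing to do.
print(classify_consistency(baseline, [baseline, baseline]))

# Every run differs in the same way: validate the data change, update the baseline.
print(classify_consistency(baseline, [["doc-1", "doc-4"], ["doc-4", "doc-1"]]))

# Only some runs differ: start asking about replication and load balancing.
print(classify_consistency(baseline, [baseline, ["doc-1", "doc-2"]]))
```

The "intermittent" outcome is the one that points at infrastructure rather than data, which is why it warrants the DevOps conversation rather than a script update.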

For all of the above we could be using a simple tool such as Selenium with a dashboard to show results. Selenium allows us to run the tests in a loop, vary the query, save down results files, and so on. What I would not use it for, although I know it does get used for this, is performance.

Performance
Another aspect of search is the speed at which we get back our search results. We'd expect this to vary, but not by much; usual changes in network traffic and machine resources are fine to a degree. When we start to see notable slowdowns, we need to investigate. To test performance, use a performance testing tool; as above, Selenium ain't it. Grab a copy of JMeter if you're working with open source tools. It should be an easy matter to replicate your tests in JMeter and build them out into a test plan that lets you performance test search.
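A proper load test belongs in a tool like JMeter, but the shape of the "notable slowdown" check is easy to sketch: time repeated runs of a query and flag when the median latency drifts beyond a tolerance of the recorded baseline. `run_query` below is a hypothetical stand-in for whatever issues the search.

```python
import statistics
import time

def time_query(run_query, samples=5):
    """Return per-run latencies (in seconds) for repeated executions of one query."""
    latencies = []
    for _ in range(samples):
        start = time.perf_counter()
        run_query()
        latencies.append(time.perf_counter() - start)
    return latencies

def is_notable_slowdown(latencies, baseline_median, tolerance=2.0):
    """Flag when the median latency exceeds the baseline by the tolerance factor."""
    return statistics.median(latencies) > baseline_median * tolerance
```

Using the median rather than the mean keeps one slow outlier (a GC pause, a cold cache) from raising a false alarm; the tolerance factor is a judgment call to agree with the team.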

In closing
In this post we've seen that search testing is not just putting a few search strings into a text field and checking the results don't look wrong. We have the full scope of functional and performance testing, along with data accuracy and consistency, to consider. To test thoroughly we need sight of the requirements, but also the code or API docs, and an understanding of the network infrastructure that's in place.


Mark

Categories: Blogs

Special on ISTQB Foundation Level e-Learning - This Week Only!

I just wanted to let you know about a special offer that is for this week only.

Before I go into that, I want you to know why I'm making this offer.

First, I know that many people have stretched training budgets and have to make every dollar count.
Second, I have over 15 e-learning courses in software testing, but the one that I think covers the most information in software testing is the ISTQB Foundation Level course.

The reason I conduct and promote the ISTQB program is because it gives a well-rounded framework for building knowledge in software testing. It's great to get your team on the same page in terms
of testing terminology. It also builds credibility for testers in an organization.

ISTQB is the International Software Test Qualifications Board, a not-for-profit organization that has defined the "ISTQB® Certified Tester" program that has become the world-wide leader in the certification of competences in software testing. Over 336,000 people worldwide have been certified in this program. The ISTQB® is an organization based on volunteer work by hundreds of international testing experts, including myself.

You can learn more at www.istqb.org and about the ASTQB (American Software Testing Qualifications Board) at www.astqb.org.


I think e-Learning has the best results in preparing people to take the ISTQB Foundation Level exam because you have time to really absorb the concepts, as opposed to trying to learn everything in 3
or 4 days. Plus, you can review the material at any time. That's hard to do in a live class. I have seen people score very high on the exam after taking this course and it gets great reviews.

OK.... now for the details....

For this week only, Monday, Sept 22 through Midnight (CDT) Friday, Sept 26th, I am running a special offer on ISTQB Foundation Level certification e-learning training. If you purchase the 5-team license, you get an extra person at no extra cost - exams included!

So, if you have been thinking about getting your team certified in software testing, this is a great opportunity - an $899 value.

In addition, if you order a 5-team license or higher, I will conduct a private one-hour web meeting Q&A session in advance of the exam. Your team also gets access to the course as long as they need it - no time limits!

All you have to do is use the code "ISTQB9922" at checkout time. You will be contacted for the names of the participants.

Payment must be by credit card or PayPal.

To see the details of the course, go to
http://riceconsulting.com/home/index.php/ISTQB-Training-for-Software-Tester-Certification/istqb-foundation-level-course-in-software-testing.html

To learn more about the e-learning program, go to
http://riceconsulting.com/home/index.php/Table/e-Learning-in-Software-Testing/.

To register, go to https://www.mysoftwaretesting.com/ISTQB_Foundation_Level_Course_in_Software_Testing_p/istqb5.htm

Any questions? Just respond to this e-mail.

Act fast, because this deal goes away at Midnight (CDT) Friday,
Sept 26th!

Thanks!
Categories: Blogs

Meet the uTesters: Pablo Baxter

uTest - Mon, 09/22/2014 - 19:36

Pablo Baxter is a Gold-rated tester on Paid Projects at uTest, and a former Forums moderator, hailing from the United States. He is a U.S. Air Force veteran pursuing his computer science degree at the University of California – San Diego.

Pablo enjoys beautiful and sunny San Diego with his wife and two daughters, and his work background includes experience in IT/tech support, firefighting, aircraft maintenance, computer programming, and software testing. In that order!

Be sure to follow Pablo’s profile on uTest so you can stay up to date with his activity in the community!

uTest: What drew you into testing initially? What’s kept you at it?

Pablo: I had some expensive hobbies my wife did not want to support (gaming, beer, computers, etc.), so when I found uTest, I figured I could make enough from it to support my own hobbies and then stop. However, after I started really getting into the testing groove, I just loved it. The community, the interactions, and the opportunity to really dig deep and find these bugs (I call it the thrill of the hunt) have been the main reasons I have kept coming back. The money is now secondary for me, but I still like being paid!

uTest: What’s your go-to gadget?

Pablo: My desktop computer. I know it’s not quite a gadget (not easy to carry around a desktop), but as a student programmer and tester, I find myself sitting at my desk working with my computer more often than with any other device I have at my disposal. Even when I am working solely on a mobile device, I am right next to my computer, monitoring logs, checking uTest Forums chat, and having some music or a movie playing.

uTest: What is the one tool you use as a tester that you couldn’t live without?

Pablo: Jing. It has been invaluable for presenting those bugs that are difficult to explain in text. If the customer is looking for just screenshots, Jing allows me to take them, select what part of the screen to capture from, and add text to (or highlight) the focal area as well. Mainly, I use it to capture a screencast so I can explain my thoughts or steps as I go through reproducing the bug.

uTest: What’s your favorite part of being in the uTest Community?

Pablo: The most amazing part of being in the uTest Community is that I get to work, and interact with, people from all over the world! Just that alone is worth being part of uTest.

uTest: What keeps you busy outside testing?

Pablo: As a husband, a father of two, and a student, I am kept very busy when I’m not testing. I am hoping to achieve a degree in Computer Science from the University of California, San Diego, and then possibly convert to the dark side (programming).

Categories: Companies

How to Perform Impact Analysis with TestTrack

The Seapine View - Mon, 09/22/2014 - 17:00

TestTrack’s impact analysis tools take the guesswork out of understanding and approving requirement changes. You can quickly understand the scope of changes in the context of the entire project and make better choices about which changes to approve.

To perform impact analysis on a specific requirement, open the requirement and click the Traceability tab. Click Impact Analysis and then select the option for Forward Impact, Backward Impact, or both.

Impact analysis is available on the requirement Traceability tab. (Click to enlarge.)

Requirements that are related in a requirement document or linked to each other are displayed, as well as linked test cases, test runs and defects.

Detailed information is displayed for each dependent item to help you determine the item’s status and view more about its relationship with the requirement.

Impact analysis displays detailed information about requirement relationships. (Click to enlarge.)

Forward and backward impact analysis both display directly and indirectly impacted items. The following list shows the items displayed for each type of impact analysis.

Forward impact
  • Direct: Child requirements one level down in the requirement document hierarchy; items with child or peer links to the requirement
  • Indirect: Items with child or peer links to the directly impacted items

Backward impact
  • Direct: Parent requirements one level up in the requirement document hierarchy; items with parent or peer links to the requirement
  • Indirect: Items with peer or parent links to the directly impacted items
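The forward-impact rules above can be modeled as a small graph traversal: direct impact is the union of child requirements and linked items, and indirect impact is whatever links to those. This is an illustrative sketch of the concept only, not TestTrack's implementation; the linked test-case IDs are hypothetical.

```python
def forward_impact(item, children, links):
    """Return (direct, indirect) forward impact for one item.

    children: map of item -> child requirements one level down
    links:    map of item -> items with child or peer links to it
    """
    direct = set(children.get(item, [])) | set(links.get(item, []))
    indirect = set()
    for d in direct:
        indirect |= set(links.get(d, []))
    return direct, indirect - direct  # an item is never both direct and indirect

# The FR-25 example from the text: FR-20, FR-26 and FR-21 are its children.
children = {"FR-25": ["FR-20", "FR-26", "FR-21"]}
links = {"FR-25": ["TC-7"], "FR-20": ["TC-9"]}  # hypothetical linked test cases

direct, indirect = forward_impact("FR-25", children, links)
```

Backward impact is the same traversal with the parent and link maps reversed, which is why both analyses can report direct and indirect items symmetrically.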

The following example shows the table of contents for a requirement document. Notice the relationships that FR-25 has. It is the parent requirement of requirements FR-20, FR-26, and FR-21.

Requirement relationships are based on the requirement document hierarchy. (Click to enlarge.)

In this forward impact analysis example, the child requirements of FR-25 are displayed in the Impact Analysis area. Test cases and test runs linked to the requirement FR-25 are also displayed. If the requirement changes, these dependent items should be investigated to determine if additional changes are needed.

Forward impact analysis displays downstream dependencies. (Click to enlarge.)

In the following backward impact analysis example, parent requirements of requirement FR-25 are displayed. Requirement FR-25 may be affected if these requirements change.

Backward impact analysis displays upstream dependencies. (Click to enlarge.)

Make Informed Decisions with Impact Analysis

TestTrack’s impact analysis tools provide a clear picture of relationships between items so you can accurately determine the impact of changing requirements. A better understanding of these relationships will help you ensure that changes are not missed and do not negatively affect the project outcome.

To learn more about TestTrack’s impact analysis tools, download our “Analyzing the Impact of Requirement Changes” guide.


Categories: Companies

Preemptive solutions key for Java application development security

Kloctalk - Klocwork - Mon, 09/22/2014 - 15:00

As applications continue to grow in significance for countless organizations, so, too, does the need to focus on security. Recent events have demonstrated that failure to adequately protect these resources can have significant consequences.

Writing for InfoWorld, industry expert John Matthew Holt recently singled out the need for organizations using Java to rethink their approach to application security. In particular, he emphasized the value of preemptive solutions, such as static code analysis tools.

Java risks
According to Holt, Java-based application development presents a number of unique, difficult challenges. For one thing, he pointed out that Java programmers typically import thousands of lines of code from external library sources. The problem with this strategy is that no one is tasked with ensuring that this code has received sufficient security scrutiny.

"Therefore, vulnerabilities can be repeatedly introduced into in-house code through this 'imported code backdoor,'" Holt explained. "These vulnerabilities may be unknown to the enterprise, but well-known to attackers."

The writer further noted that there are a number of exploits that cyberattackers can use to gain access to corporate networks in these cases. For example, cybercriminals may deploy SQL injection attacks.
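The SQL injection attack mentioned above, and its standard preventive fix, can be shown with Python's built-in sqlite3 module. This example is mine, not from the article: the first query splices user input directly into SQL, while the second binds it as a parameter so the driver never treats it as SQL.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

# Classic injection payload: closes the quoted value, then adds an always-true clause.
malicious = "nobody' OR '1'='1"

# Vulnerable: string concatenation lets the input rewrite the WHERE clause,
# so this returns every row in the table.
unsafe = conn.execute(
    "SELECT name FROM users WHERE name = '" + malicious + "'"
).fetchall()

# Safe: a bound parameter is treated strictly as a value, never as SQL,
# so no user is named "nobody' OR '1'='1" and nothing is returned.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)
).fetchall()
```

Parameterized queries are exactly the kind of pattern a static analysis tool can enforce mechanically, by flagging any SQL built through string concatenation.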

Further compounding these issues is the fact that Java security work usually centers on either network-based or testing-based efforts, and neither of these has proven sufficiently effective at protecting companies from external threats.

Preventative efforts
Holt argued that, while not a complete solution in and of themselves, application testing tools can play a major role in securing Java applications. He noted that these resources can prove highly educational for developers, identifying vulnerabilities in the code that would otherwise go unnoticed.

To fully take advantage of these capabilities and protect their Java-based development efforts, though, it is critical for businesses to look for and implement the best available application security testing tools.

Static analysis is a case in point. These tools are key to preventing code flaws from developing into full-blown security issues. With static analysis resources in place, firms can reduce the cost of testing by identifying potential defects as early as possible in the code's lifecycle. This has the further benefit of maximizing developers' productivity, allowing them to focus on improving and introducing new features rather than dealing with the minutiae of eliminating security flaws.

Further steps
Beyond static analysis tools, firms relying on Java application development should also deploy network-based defenses. These resources, including firewalls and intrusion prevention systems, can prevent malicious outsiders from gaining access to the company's network, thereby protecting production systems from serious threats.

However, as the writer noted, these efforts have a number of shortcomings that can compromise their effectiveness. Most notably, these tools cannot simply block all traffic, or else the company would lose access to legitimate incoming information. Obviously, this would create a host of other problems.

This issue highlights just what a challenge application security can prove to be for a company, and why advanced static code analysis and related tools are so essential.

Learn more:
• Recognize, understand, and combat injection attacks by taking this online course (one of the many free courses offered in our Secure Coding Learning Center)
• See how static analysis detects vulnerabilities on the OWASP Top Ten list of common exploits

Categories: Companies

dynaTrace: Correcting Mis-Representations

It’s hard to argue with facts. That’s probably why AppDynamics’ spin machine has been hard at work lately, trying to find distorted angles and misrepresentations of our capabilities. This is an attempt to distract from their own shortcomings and from the fact that, this year again, customers in the market for a new-generation APM favored Dynatrace […]

The post dynaTrace: Correcting Mis-Representations appeared first on Compuware APM Blog.

Categories: Companies

The Curious Relationship of Culture and Tools

IBM UrbanCode - Release And Deploy - Mon, 09/22/2014 - 10:33

We surround ourselves with things that look like us, and can even be shaped by those things. Individuals look like their dogs and their cars. Meanwhile, Conway's law tells us that organizations are constrained to build systems that mirror their own structure. The reverse is also true: the selection of a more modular architecture tends to result in an organization with more (smaller) teams.

Organizational culture is also shaped by tools. I first observed the link between tool and culture years ago when selling continuous integration tools. CI tools integrate with a ton of stuff, so if you’re trying to figure out whether your tool is a good fit for an organization, you ask about source control, bug tracking, and testing tools. What I learned quickly was that knowing which source control tool an organization uses tells you a great deal about what they value. A Perforce shop (setting aside game studios) was one where the developer was king, while a Harvest shop valued control over agility. Where Subversion was used by development but changes were then exported into a system-of-record SCM like ClearCase, you knew the organization had the silo challenges we now call a “DevOps problem.” Often, knowing the source control tool told you who in the organization you had to sell to, and how, with alarming accuracy.

Finding tools changing culture

This insight was handy in sales, but not generally interesting. What brought it back to mind was @OnCommit’s presentation at IBM Innovate 2014. There, he argued, against conventional wisdom, that you can change culture by implementing tools. The argument is that culture within IT is basically our norms of how we act, plus our shared language. When we change a process radically by implementing new tooling, we change some behavior and some of that language. When everyone can clearly see that version 1.2.3 is in QA, we ask testers about the state of that version. We speak about promoting that version to UAT rather than “the latest”. In turn, we think in terms of versions rather than “the latest” and come to value versioning more. When considering infrastructure as code in a year’s time, the argument that we can version infrastructure will be more compelling because our values will have changed. An information radiator nudges culture.

Similarly, many of the companies who valued control over agility when selecting those CI tools did so because they were looking for correct, managed builds. Continuous building was either seen as undesirable or irrelevant. It was amazing to visit these companies over the years. While they still valued control, build frequency would pick up over time, and developers would act a bit more Agile. Many of these shops are in a water-scrum-fall mode now. Part of this is simple economics: as the price (effort) of something falls, demand for it picks up. Maciej Zawadzki suggests in this video the heuristic that automating a previously manual deployment process generally results in a ten-fold increase in deployment frequency.

Use tools to drive change

In Jurgen Appelo's book "How to Change the World" he calls out Infrastructure as one of five pillars of changing an organization by manipulating its environment. This may mean sitting people closer together to improve collaboration, or it could be changing tools. The key is to surround people with guidance pointing them in the right direction.

Tools have a sneaky way of reforming the organizations that implement them. Organizations will often come to value what their tools value, so long as the tool is effective at what the organization selected it for. Individuals and teams who are looking to drive cultural change should be aware of this relationship.

Categories: Companies

Read it and Weep

Hiccupps - James Thomas - Sun, 09/21/2014 - 11:38
Extracts from How Complex Systems Fail by Richard I. Cook:

[it is] impossible for [complex systems] to run without multiple flaws being present. Because these are individually insufficient to cause failure they are regarded as minor factors during operations ... complex systems run as broken systems.

Organizations are ambiguous ... [and that] ambiguity is resolved by actions of practitioners at the sharp end of the system.

Safety is an emergent property of systems ... continuous systemic change insures that hazard and its management are constantly changing.

... all practitioner actions are actually gambles, that is, acts that take place in the face of uncertain outcomes ... after accidents ... post hoc analysis regards these gambles as poor ones. But ... successful outcomes are also the result of gambles ...

Image: https://flic.kr/p/7TxmXk
Categories: Blogs

Introducing TestRail 4.0

Gurock Software Blog - Sun, 09/21/2014 - 09:44

Over the last few months our team has been hard at work redesigning TestRail's user experience and adding many often-requested features to the application. And today I'm excited to announce the release of TestRail 4.0, a new major version of our modern test management tool!

The new version features a redesigned user interface, bulk editing for test cases, powerful new filter and grouping options, significantly improved navigation, baseline & single-suite support, new reporting options and much more. With more than 80 new features, enhancements and fixes, this is our biggest TestRail release yet.

All TestRail Hosted accounts have been updated to the new version automatically and TestRail Server customers can upgrade their installations starting today. See below for a detailed release description and you can find additional screenshots in our updated TestRail tour. We can’t wait to see how teams will use the new version and added capabilities!

Try TestRail Now

Get started with TestRail in minutes
and try TestRail free for 30 days!

Redesigned User Experience

TestRail's user interface has always been all about speed, ease of use and productivity. For TestRail 4.0 we carefully reviewed all areas of TestRail and improved many interface elements, the typography, dialogs, navigation and how users interact with the application. We also updated TestRail's interface with a fresh, modern look. The resulting design and user experience make it even easier for small and large teams to manage their testing efforts with TestRail.


Test Case Bulk Editing

You have always been able to quickly add test results to many tests at once. TestRail 4.0 now also introduces a powerful way to bulk edit your test cases to update any of the case attributes. Whether you want to change the preconditions of all test cases in a section, update the type of your cases in an entire suite or carefully filter and change the priority of thousands of cases at once: the new bulk editing feature makes this super easy.


You can either select the test cases you want to update manually, or you can use the bulk editing feature together with our new powerful filtering options. We’ve also updated TestRail’s suites to make it easier to move, copy and delete many test cases at once so you can better organize and refactor larger projects. In addition to editing many test cases at once, we’ve also improved assigning tests on the test run pages.

Improved Navigation

We also reviewed TestRail’s suite, run and section navigation and implemented various improvements based on customer feedback and feature requests. The result is that TestRail 4.0 now also works beautifully with huge test suites and projects when using just a single test suite, making it easy to build deep section hierarchies and organize your test cases. And the improved test case navigation makes TestRail even more productive for both small and large teams.

Remembering where you left off

For TestRail 4.0 we invested a lot of time to make sure that the application remembers where you left off. In practice this means that TestRail now remembers the sections you previously expanded or collapsed, automatically restores your position when you use your browser's Back button, and makes it easier to share links.

New view modes for sections & cases

When working with larger projects it's sometimes difficult to focus on the relevant test cases. TestRail 4.0 introduces new view modes that let you choose between viewing all your test cases at once or focusing on just specific sections or subsections. You can now conveniently switch between modes from the sidebar.

Improved scrollable, resizable sidebar

We redesigned the section and group tree on the suite and run pages to better scale for large suites and deep section hierarchies. To support this we made the sidebar resizable, added support for independent scrolling of the section list and made the list sticky when you navigate through your cases.

Suite Modes and Baselines

Based on our experience working with thousands of teams adopting TestRail over the years, we found that many teams would benefit from using just a single test suite per project and using TestRail's flexible section hierarchies to organize their test cases instead. With the new improved suite navigation and scalability we are adding a new default mode that uses a single suite per project. This means that when you click on the Test Cases tab, you are taken directly to your test cases. We still support separate test suites for existing and new projects for teams who prefer this.


In addition to the single suite mode, we are adding another alternative option to use test suites for baselines and versions in your projects. If you have the need to maintain multiple separate copies and branches of your test cases for different project versions in parallel, TestRail’s new baselines make this easy. Baselines allow you to maintain a single test case repository per project and make repository copies for separate versions and branches.


Filters and Grouping Options

TestRail has always had advanced filtering options to make it easy to select your test cases for new test runs and plans. With TestRail 4.0 we are extending the existing capabilities and adding powerful filter, sorting and grouping options to the test suite and run pages. This means that you can now easily filter, sort and group your tests and cases by any of their attributes, making it easier to find and work with your tests.

We redesigned the suite and run pages to include a new filter and edit toolbar at the top of the pages that stays in place when you scroll through your test lists. Together with the new bulk editing options, the filter and edit toolbar makes it super easy to update, organize or remove test cases.


Better Case Copying & Selection

Making it easy to reuse your test cases for different testing phases, releases and projects has always been one of TestRail’s design goals. In many scenarios you don’t need to duplicate or copy your cases, as you can simply start new test runs against your existing suites. For situations where you want to duplicate your test cases, we have completely redesigned TestRail’s copy functionality.


TestRail now uses a three-column layout for the Copy/Move dialog to include a full section tree and test case filters. This allows us to scale the dialog for very large projects, and makes it easier to select and navigate sections as well as to choose test cases based on attribute filters. We've also redesigned TestRail's test case selection dialog for runs and plans in a similar way.

Improved Todo Tab

TestRail 4.0 comes with a redesigned Todo tab to integrate with and benefit from the new test run page and its filtering options. The new Todo tab makes it easy to see the workload of your team at a glance and track all tests assigned to you. You can then select any test run to drill down into the data and work with your tests.


New Reporting Options

Since our big reporting release for TestRail last year, we regularly receive feedback from teams that the new reporting section is one of their favorite features. We continue to add more reporting capabilities to TestRail to make it easier for teams to gain insights from the data they track with TestRail. In the new release we are adding various new options to existing reports and introducing the Activity Summary report to easily track new and updated test cases.


Additional Improvements

TestRail 4.0 also comes with many additional new features and improvements; we've included a selection of additional enhancements below. Please see our full changelog for a complete list of changes.

Section descriptions

TestRail now has support for section descriptions to include additional rich context information in your test suites. This also makes it easier to work with exploratory testing and plan your test sessions in advance.

Drag & drop improvements

We’ve also improved the existing drag & drop functionality for copying and moving test cases to make this functionality easier to discover and use for new users. You can use drag & drop to duplicate and move test cases on the test suite pages.

New API capabilities

TestRail's API is especially popular for integrating third-party tools and test automation. We added new capabilities to the API, such as improved filter and pagination options, new result methods and improved error reporting.
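As a rough sketch of how filter and pagination options surface in practice, a client can append them as query parameters to the API's GET methods. TestRail's documented index.php?/api/v2/... URL scheme is used below, but the specific parameter names (priority_id, limit, offset) and the server address are illustrative; check the API reference for your TestRail version before relying on them.

```python
# Minimal sketch of building a filtered, paginated get_cases request
# for TestRail's REST API. Only the URL is constructed here; sending
# the request would additionally require authentication headers.
from urllib.parse import urlencode

def build_get_cases_url(base_url, project_id, suite_id, **filters):
    """Build a get_cases URL with optional filter/pagination parameters."""
    url = f"{base_url}/index.php?/api/v2/get_cases/{project_id}&suite_id={suite_id}"
    if filters:
        # Sort for a deterministic parameter order.
        url += "&" + urlencode(sorted(filters.items()))
    return url

url = build_get_cases_url(
    "https://example.testrail.com",  # hypothetical server
    project_id=1,
    suite_id=3,
    priority_id=4,  # filter: only cases with this priority
    limit=50,       # pagination: page size
    offset=100,     # pagination: skip the first 100 cases
)
print(url)
```

Any HTTP client can then issue the request against the built URL; the same pattern applies to the other filterable GET methods.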

JIRA integration enhancements

For the JIRA defect plugin integration we added options to include links to other issues when you push a bug report to JIRA (e.g. "related to" or "duplicate of"). The integration now also supports JIRA custom fields that use the Label field type.

Responsive test steps

Teams using separate test steps in TestRail now benefit from UI improvements to edit test steps and expected results side-by-side. The UI automatically adjusts based on your browser size and you can now also resize the edit boxes.

Getting the new version

We recommend upgrading to the new version to benefit from the new functionality and redesigned interface. Upgrading TestRail is very easy and we’ve included all the required details below, depending on the edition you use:

  • TestRail Hosted: your account has already been updated!
  • TestRail Server (licensed): you can download the latest version or renew your support plan from your customer portal account.
  • TestRail Server (trial): please contact us to upgrade your download trial.
  • New user: want to try TestRail? Get a free trial.

You can read the full change log to learn more about all new features, improvements and bug fixes included in TestRail 4.0. If you have any questions or feedback about the new version, please let us know!

Older browser support in TestRail 4.0: like most other applications, we drop support for older web browsers in new releases from time to time. Specifically, we are dropping support for Internet Explorer 7.x and 8.x with TestRail 4.0. If you cannot upgrade Internet Explorer, we recommend using Google Chrome to access TestRail.

Categories: Companies

CloudBees Around the World - September 2014


CloudBees employees travel the world to a lot of interesting events. Where can you find the Bees before September ends? Hint: This month, it's all about JavaOne. If you are there, be sure to connect with us!
  • JavaOne San Francisco – September 28 - October 2. Take advantage of tools to help you generate awareness, enthusiasm and participation for the Java event of the year. You can choose from more than 400 sessions, including technical sessions, hands-on labs, tutorials, keynotes and birds-of-a-feather sessions. Learn from the world's foremost Java experts, improve your working knowledge and coding expertise, and follow in-depth technical tracks to focus on the Java technology that interests you most. To register and get more information, just click on the links.




Categories: Companies

Skeleton Key

Sonatype Blog - Fri, 09/19/2014 - 18:14
A skeleton key is capable of opening any lock regardless of make or type. Do you know anyone who has one? I do. Lots of them. At the HP Protect conference last week in Washington DC, the theme was "think like a bad guy". They introduced us to known hackers, their approaches to...

To read more, visit our blog at blog.sonatype.com.
Categories: Companies

How We Really Need to Stop ISO 29119

uTest - Fri, 09/19/2014 - 16:48

After some real consideration, I have decided to sign the Stop 29119 petition, and along the way also signed the Professional Tester's Manifesto.

The main reason that really resonates with me is that companies that would normally not use the standard would be compelled to comply with it just to win business. If even a few companies that conform to the standard are successful (and it doesn't have to be because they comply with the standard), others will try to follow their path.

At some point, almost every company complies with the standard, and no one knows why; they know only that the paperwork is unbearable, there isn't any room for actual testing, and they are afraid to step out of this vicious circle. I do not wish for the testing field to go through this, and that is why I have signed the petition.

But here is where it gets tricky: I think the people who started this opposition to the ISO standard should have thought more about their actions before jumping the gun. One of the problems I have with this course of opposition is that it gives too much power to the body behind the standard. After some time, all this opposition will turn into just more information. People searching for testing-related information may come across all these countless blog posts against 29119, and the only thing they will do is research the standard, tell themselves that since so many people wrote about it they should try it, and maybe convince their companies to comply with it.

Even negative advertising is still advertising; it is always of some value to the product being advertised, and gives it some kind of power in the form of public awareness. Take the ISTQB as proof. As a new tester a few years ago, I wanted to get certified (I didn't) because everybody was talking about it. The talk was not in a good light, but I still thought certification would help me land a good job. There weren't any other options, so what should a new tester do in this case?

If you really want to stop 29119 now, you need to go all the way, not just write a blog post about it and move on. You need to grow in numbers and become more powerful along the way until you truly stop it. In our field, I think getting more and more powerful is pretty hard to achieve. I would even go so far as to say that it is almost impossible.

But I think there was another way to go about all this: ignore the standard. Let it die on its own. The thing is, it is a standard, not a law. You do not have to abide by it. You can ignore it. It's a product just like any other. The body itself wouldn't be able to advertise it or raise public awareness about it. Companies and testers wouldn't use it and wouldn't care about it. It would die. The proof lies in the predecessors of this new standard: all of them are dead, trying to start fresh under this "new" old standard.

Now it's too late for that; the petition is the only way to go. We do not know what will happen now that this wave of opposition has given the standard so much power. I just hope we didn't cut off the branch we were standing on.

Marek (Mark) Langhans is a Gold-rated tester and former Forums Moderator in the uTest Community, and hails from Prague. Mark has tested information systems, web, mobile and desktop applications of domestic financial institutions for the past couple of years.

Categories: Companies

Defending Network Performance with Packet Level Detail

My favorite war room accusation is: “It’s always the network at fault!” Whether you’re the one taking the blame or the one pointing the finger likely has everything to do with which seat you occupy in that war room. I suppose that comes with the territory, because at the same time there seems to be […]

The post Defending Network Performance with Packet Level Detail appeared first on Compuware APM Blog.

Categories: Companies

Australia’s Department of Immigration sees tremendous results with open source

Kloctalk - Klocwork - Fri, 09/19/2014 - 15:00

As open source software continues to grow in popularity, more and more organizations are realizing the advantages offered by these solutions. A key example of the potential inherent to such an approach can be found in Australia, where the Department of Immigration recently leveraged open source to achieve major results on a tight budget, iTnews reported.

A new approach
According to the news source, the Department of Immigration needed to develop a means of sorting through millions of visitors to Australia, but was provided a budget of only $1 million.

To meet this challenge, the department turned to open source software, as Gavin McCairns, chief risk officer for the Department of Immigration, explained at the recent Technology in Government forum in Canberra. He noted that the initial pilot only cost $50,000, the bulk of which was spent on hiring a consultant to provide training and guidance to employees who lacked expertise in open source.

"We developed an approach based on phases of prototype, pilot and production. It was based on the idea of trying stuff for nothing or very cheap," said McCairns, the source reported.

McCairns explained that this project's goal was to reduce the number of passengers waiting in line for immigration officials' assistance in airports by making it faster and easier to travel into Australia.

ITnews noted that the country's current holiday visa system typically receives nearly 300,000 applications per year. With the new open source system in place, the Department of Immigration is able to sort through these documents far more quickly and effectively than with previous approaches.

Open source advantages
McCairns asserted that government agencies too frequently overlook open source options in favor of proprietary software that is both more expensive and less fitting for their specific needs. For example, he noted that his own department previously invested $15 million in software solutions that ultimately were left underutilized, as no one really knew how to use the technology properly.

Open source avoids this outcome not only because it does not require such an initial investment, but also because the software can be tweaked and modified to meet the organization's needs. Proprietary solutions, on the other hand, are more or less set in stone.

Caution needed
However, that being said, it is important for decision-makers to realize that there are still limits to what open source can accomplish and risks involved in its use.

The news source reported that Dirk Klein, general manager of ANZ public sector markets at SAS, emphasized that open source software implementation requires significant investment in terms of human resources and ongoing maintenance. While this certainly doesn't mean that open source software is more expensive than proprietary solutions, it is crucial for firms to realize that there will be a cost associated with this approach, despite the software being free.

Additionally, Klein told iTnews there are risk issues involved in an open source deployment. Organizations utilizing such resources need to be keenly aware of the dangers specific to open source software and take proactive steps to achieve and maintain security.

To fully protect the organization while pursuing open source solutions, decision-makers should make sure to invest in the appropriate complementary tools. Scanning and auditing resources are critical for ensuring that firms using open source do so without risk of a security breach or licensing violation. These tools can identify issues early, before they become a reality, enabling the organization to take corrective measures before any serious costs develop. Without the right support system in place, even skilled open source software developers may overlook crucial security and licensing issues.

Categories: Companies

Works on my machine

The Social Tester - Fri, 09/19/2014 - 14:24
I’m proud to announce a new range of merchandise from The Social Tester. I’m promoting my new online store at Zazzle. My first design, available in two flavors, is the classic “Works on my machine”. With the alternative “Doesn’t work on my machine”. Each month I’m hoping to put a new design online. The service […]
Categories: Blogs

Even better – communicating while drawing!

Agile Testing with Lisa Crispin - Thu, 09/18/2014 - 22:39
Drawings explaining the business rules for a feature

In my previous post I wrote about a great experience collaborating with teammates on a difficult new feature. The conversations around it continued this week when we were all in the office. Many of our remaining questions were answered when the programmers walked us through a whiteboard drawing of how the new algorithm should work. As we talked, we updated and improved the drawings. We took pictures to attach to the pertinent story.

Not only is a picture worth a thousand words, but talking with a marker in hand and a whiteboard nearby (or their virtual equivalents) greatly enhances communication.

The post Even better – communicating while drawing! appeared first on Agile Testing with Lisa Crispin.

Categories: Blogs

Top Tweets from Let’s Test Oz 2014

uTest - Thu, 09/18/2014 - 21:41

The popular Let’s Test Oz conference just wrapped up in Sydney, Australia. It ran from September 15-17 and featured three full days of tutorials, keynotes, and sessions from noted industry experts like James Bach.

We didn't get down to Oz to attend, but we were able to follow along with the event on Twitter. Here are some top tweets from the show. And, if you want to see more, check out tweets tagged with #LetsTestOz on Twitter.

#letstestoz auditorium! pic.twitter.com/LNaQcl1CHK

— David Greenlees (@MartialTester) September 15, 2014

In the context driven community those who don't know how to test are respected.. as long as they are learning – @jamesmarcusbach #LetsTestOz

— Alessandra Moreira (@testchick) September 15, 2014

Team 1 developing responses to team 2 leadership issue in @FionaCCharles #LetsTestOz leadership workshop pic.twitter.com/48s743JRtj

— Craig McKirdy (@craigmckirdy) September 15, 2014

You must be open to learn things in different ways #ContextDriven @jamesmarcusbach #LetsTestOz

— Simon P. Schrijver (@SimonSaysNoMore) September 15, 2014

Be aware of your biases in #testing and in #coaching, learn to de-bias yourself – @jamesmarcusbach. Do some 'instinct grooming' #LetsTestOz

— Adam Howard (@adammhoward) September 15, 2014

Day2 of #LetsTestOz kicked off by @KeithKlain pic.twitter.com/3NPWa3Akmj

— James Aspinall (@ThePeopleTester) September 15, 2014

Presenting the naboo n1 starfighter design and developed ship #letstestoz #lego #communicatingspecs pic.twitter.com/9MrgukKdVg

— Sigurdur Birgisson (@siggeb) September 16, 2014

Great things happen when testers become champions of business goals. -@vds4 #letstestoz

— Aaron Hodder (@AWGHodder) September 16, 2014

Awesome honesty from Margaret Dineen sharing her experience in a project that didn't end well. So rare to hear these. #LetsTestOz

— Katrina Clokie (@katrina_tester) September 17, 2014

Hey testers! If you're not mentally exhausted after attending a conference, you might be at the wrong one… #LetsTestOz @LetsTest_Conf

— Keith Klain (@KeithKlain) September 17, 2014

To see what other events are upcoming in the software testing world, make sure to check out our revamped Events Calendar.

Categories: Companies
