
Feed aggregator

Two New Videos About Testing at Google

Google Testing Blog - Thu, 03/27/2014 - 23:41
by Anthony Vallone

We have two excellent, new videos to share about testing at Google. If you are curious about the work that our Test Engineers (TEs) and Software Engineers in Test (SETs) do, you’ll find both of these videos very interesting.

The Life at Google team produced a video series called Do Cool Things That Matter. The series includes a video of an SET and a TE on the Maps team (Sean Jordan and Yvette Nameth) discussing their work on Google Maps.

Meet Yvette and Sean from the Google Maps Test Team



The Google Students team hosted a Hangouts On Air event with several Google SETs (Diego Salas, Karin Lundberg, Jonathan Velasquez, Chaitali Narla, and Dave Chen) discussing the SET role.

Software Engineers in Test at Google - Covering your (Code)Bases



Interested in joining the ranks of TEs or SETs at Google? Search for Google test jobs.

Categories: Blogs

Optimal Logging

Google Testing Blog - Thu, 03/27/2014 - 23:41
by Anthony Vallone

How long does it take to find the root cause of a failure in your system? Five minutes? Five days? If you answered close to five minutes, it’s very likely that your production system and tests have great logging. All too often, seemingly unessential features like logging, exception handling, and (dare I say it) testing are an implementation afterthought. Like exception handling and testing, you really need to have a strategy for logging in both your systems and your tests. Never underestimate the power of logging. With optimal logging, you can even eliminate the necessity for debuggers. Below are some guidelines that have been useful to me over the years.


Channeling Goldilocks

Never log too much. Massive, disk-quota-burning logs are a clear indicator that little thought was put into logging. If you log too much, you’ll need to devise complex approaches to minimize disk access, maintain log history, archive large quantities of data, and query these large sets of data. More importantly, you’ll make it very difficult to find valuable information in all the chatter.

The only thing worse than logging too much is logging too little. There are normally two main goals of logging: help with bug investigation and event confirmation. If your log can’t explain the cause of a bug or whether a certain transaction took place, you are logging too little.

Good things to log:
  • Important startup configuration
  • Errors
  • Warnings
  • Changes to persistent data
  • Requests and responses between major system components
  • Significant state changes
  • User interactions
  • Calls with a known risk of failure
  • Waits on conditions that could take measurable time to satisfy
  • Periodic progress during long-running tasks
  • Significant branch points of logic and conditions that led to the branch
  • Summaries of processing steps or events from high-level functions - Avoid logging every step of a complex process in low-level functions.

Bad things to log:
  • Function entry - Don’t log a function entry unless it is significant or logged at the debug level.
  • Data within a loop - Avoid logging from many iterations of a loop. It is OK to log from iterations of small loops or to log periodically from large loops.
  • Content of large messages or files - Truncate or summarize the data in some way that will be useful to debugging.
  • Benign errors - Errors that are not really errors can confuse the log reader. This sometimes happens when exception handling is part of successful execution flow.
  • Repetitive errors - Do not repetitively log the same or similar error. This can quickly fill a log and hide the actual cause. Frequency of error types is best handled by monitoring. Logs only need to capture detail for some of those errors.


There is More Than One Level

Don't log everything at the same log level. Most logging libraries offer several log levels, and you can enable certain levels at system startup. This provides a convenient control for log verbosity.

The classic levels are:
  • Debug - verbose and only useful while developing and/or debugging.
  • Info - the most popular level.
  • Warning - strange or unexpected states that are acceptable.
  • Error - something went wrong, but the process can recover.
  • Critical - the process cannot recover, and it will shut down or restart.

Practically speaking, only two log configurations are needed:
  • Production - Every level is enabled except debug. If something goes wrong in production, the logs should reveal the cause.
  • Development & Debug - While developing new code or trying to reproduce a production issue, enable all levels.


Test Logs Are Important Too

Log quality is equally important in test and production code. When a test fails, the log should clearly show whether the failure was a problem with the test or production system. If it doesn't, then test logging is broken.

Test logs should always contain:
  • Test execution environment
  • Initial state
  • Setup steps
  • Test case steps
  • Interactions with the system
  • Expected results
  • Actual results
  • Teardown steps


Conditional Verbosity With Temporary Log Queues

When errors occur, the log should contain a lot of detail. Unfortunately, detail that led to an error is often unavailable once the error is encountered. Also, if you’ve followed advice about not logging too much, your log records prior to the error record may not provide adequate detail. A good way to solve this problem is to create temporary, in-memory log queues. Throughout processing of a transaction, append verbose details about each step to the queue. If the transaction completes successfully, discard the queue and log a summary. If an error is encountered, log the content of the entire queue and the error. This technique is especially useful for test logging of system interactions.
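
A minimal sketch of the technique (the class name and log format are invented for illustration): verbose detail is appended to an in-memory queue per transaction, and the queue is only written out if the transaction fails.

#include <iostream>
#include <string>
#include <vector>

// Invented transaction logger: detail is buffered in memory and only
// flushed to the log if the transaction fails.
class TransactionLog {
 public:
  void Append(const std::string& detail) { queue_.push_back(detail); }

  // On success, discard the detail and log a one-line summary.
  void Success(const std::string& summary) {
    std::cerr << "INFO: " << summary << std::endl;
    queue_.clear();
  }

  // On failure, flush everything that led up to the error, then the error.
  void Failure(const std::string& error) {
    for (const std::string& detail : queue_) {
      std::cerr << "DETAIL: " << detail << std::endl;
    }
    std::cerr << "ERROR: " << error << std::endl;
    queue_.clear();
  }

 private:
  std::vector<std::string> queue_;
};

int main() {
  TransactionLog log;
  log.Append("validated request");
  log.Append("wrote record 42");
  log.Failure("commit failed");  // Emits the buffered detail plus the error.
  return 0;
}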


Failures and Flakiness Are Opportunities

When production problems occur, you’ll obviously be focused on finding and correcting the problem, but you should also think about the logs. If you have a hard time determining the cause of an error, it's a great opportunity to improve your logging. Before fixing the problem, fix your logging so that the logs clearly show the cause. If this problem ever happens again, it’ll be much easier to identify.

If you cannot reproduce the problem, or you have a flaky test, enhance the logs so that the problem can be tracked down when it happens again.

Use failures as opportunities to improve logging throughout the development process. While writing new code, try to refrain from using debuggers and only use the logs. Do the logs describe what is going on? If not, the logging is insufficient.


Might As Well Log Performance Data

Logged timing data can help debug performance issues. For example, it can be very difficult to determine the cause of a timeout in a large system, unless you can trace the time spent on every significant processing step. This can be easily accomplished by logging the start and finish times of calls that can take measurable time:
  • Significant system calls
  • Network requests
  • CPU intensive operations
  • Connected device interactions
  • Transactions
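
One lightweight way to capture such timings, sketched below with an invented helper class, is a scoped timer that logs the elapsed time of a block when it goes out of scope:

#include <chrono>
#include <iostream>
#include <string>

// Invented helper: logs how long a scope took when the object is destroyed.
class ScopedTimer {
 public:
  explicit ScopedTimer(std::string label)
      : label_(std::move(label)), start_(std::chrono::steady_clock::now()) {}

  ~ScopedTimer() {
    auto elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(
        std::chrono::steady_clock::now() - start_);
    std::cerr << label_ << " took " << elapsed.count() << " ms" << std::endl;
  }

 private:
  std::string label_;
  std::chrono::steady_clock::time_point start_;
};

int main() {
  {
    ScopedTimer timer("fetch_user_profile call");
    // ... network request or other measurable work goes here ...
  }  // The elapsed time is logged here, even if the call throws.
  return 0;
}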


Following the Trail Through Many Threads and Processes

You should create unique identifiers for transactions that involve processing across many threads and/or processes. The initiator of the transaction should create the ID, and it should be passed to every component that performs work for the transaction. This ID should be logged by each component when logging information about the transaction. This makes it much easier to trace a specific transaction when many transactions are being processed concurrently.
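
A minimal sketch of the idea (the ID format and function names are invented for illustration): the initiator generates the ID once, and every component includes it in its log lines so a single transaction can be followed through the noise of concurrent work.

#include <iostream>
#include <random>
#include <sstream>
#include <string>

// Invented helper: the initiator of a transaction creates a unique ID once.
std::string NewTransactionId() {
  std::random_device rd;
  std::mt19937_64 gen(rd());
  std::ostringstream id;
  id << std::hex << gen();
  return id.str();
}

// Every component that works on the transaction logs with the same ID.
void LogForTransaction(const std::string& transaction_id,
                       const std::string& component,
                       const std::string& message) {
  std::cerr << "[txn " << transaction_id << "] " << component << ": "
            << message << std::endl;
}

int main() {
  const std::string txn = NewTransactionId();  // Created by the initiator.
  LogForTransaction(txn, "frontend", "received request");
  LogForTransaction(txn, "backend", "query completed");  // Same ID, different component.
  return 0;
}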


Monitoring and Logging Complement Each Other

A production service should have both logging and monitoring. Monitoring provides a real-time statistical summary of the system state. It can alert you when a percentage of certain request types is failing, when traffic patterns are unusual, when performance is degrading, or when other anomalies occur. In some cases, this information alone will clue you in to the cause of a problem. However, in most cases, a monitoring alert is simply a trigger for you to start an investigation. Monitoring shows the symptoms of problems. Logs provide details and state on individual transactions, so you can fully understand the cause of problems.

Categories: Blogs

The Google Test and Development Environment - Pt. 1: Office and Equipment

Google Testing Blog - Thu, 03/27/2014 - 23:40
by Anthony Vallone

When conducting interviews, I often get questions about our workspace and engineering environment. What IDEs do you use? What programming languages are most common? What kind of tools do you have for testing? What does the workspace look like?

Google is a company that is constantly pushing to improve itself. Just like software development itself, most environment improvements happen via a bottom-up approach. All engineers are responsible for fine-tuning, experimenting with, and improving our process, with a goal of eliminating barriers to creating products that amaze.

Office space and engineering equipment can have a considerable impact on productivity. I’ll focus on these areas of our work environment in this first article of a series on the topic.

Office layout

Google is a highly collaborative workplace, so the open floor plan suits our engineering process. Project teams composed of Software Engineers (SWEs), Software Engineers in Test (SETs), and Test Engineers (TEs) all sit near each other or in large rooms together. The test-focused engineers are involved in every step of the development process, so it’s critical for them to sit with the product developers. This keeps the lines of communication open.

Google Munich
The office space is far from rigid, and teams often rearrange desks to suit their preferences. The facilities team recently finished renovating a new floor in the New York City office, and after a day of engineering debates on optimal arrangements and white board diagrams, the floor was completely transformed.

Besides the main office areas, there are lounge areas to which Googlers go for a change of scenery or a little peace and quiet. If you are trying to avoid becoming a casualty of The Great Foam Dart War, lounges are a great place to hide.

Google Dublin
Working with remote teams

Google’s worldwide headquarters is in Mountain View, CA, but it’s a very global company, and our project teams are often distributed across multiple sites. To help keep teams well connected, most of our conference rooms have video conferencing equipment. We make frequent use of this equipment for team meetings, presentations, and quick chats.

Google Boston
What’s at your desk?

All engineers get high-end machines and have easy access to data center machines for running large tasks. A new member on my team recently mentioned that his Google machine has 16 times the memory of the machine at his previous company.

Most Google code runs on Linux, so the majority of development is done on Linux workstations. However, those who work on client code for Windows, OS X, or mobile develop on the relevant OSes. For displays, each engineer has a choice of either two 24-inch monitors or one 30-inch monitor. We also get our choice of laptop, picking from various models of Chromebook, MacBook, or Linux machine. These come in handy when going to meetings or lounges, or when working remotely.

Google Zurich
Thoughts?

We are interested to hear your thoughts on this topic. Do you prefer an open-office layout, cubicles, or private offices? Should test teams be embedded with development teams, or should they operate separately? Do the benefits of offering engineers high-end equipment outweigh the costs?

(Continue to part 2)
Categories: Blogs

The Google Test and Development Environment - Pt. 2: Dogfooding and Office Software

Google Testing Blog - Thu, 03/27/2014 - 23:40
by Anthony Vallone

This is the second in a series of articles about our work environment. See the first.

There are few things as frustrating as getting hampered in your work by a bug in a product you depend on. What if it’s a product developed by your company? Do you report/fix the issue or just work around it and hope it’ll go away soon? In this article, I’ll cover how and why Google dogfoods its own products.

Dogfooding

Google makes heavy use of its own products. We have a large ecosystem of development/office tools and use them for nearly everything we do. Because we use them on a daily basis, we can dogfood releases company-wide before launching to the public. These dogfood versions often have features unavailable to the public but may be less stable. Instability is exactly what you want in your tools, right? Or, would you rather that frustration be passed on to your company’s customers? Of course not!

Dogfooding is an important part of our test process. Test teams do their best to find problems before dogfooding, but we all know that testing is never perfect. We often get dogfood bug reports for edge and corner cases not initially covered by testing. We also get many comments about overall product quality and usability. This internal feedback has, on many occasions, changed product design.

Not surprisingly, test-focused engineers often have a lot to say during the dogfood phase. I don’t think there is a single public-facing product that I have not reported bugs on. I really appreciate the fact that I can provide feedback on so many products before release.

Interested in helping to test Google products? Many of our products have feedback links built-in. Some also have Beta releases available. For example, you can start using Chrome Beta and help us file bugs.

Office software

From system design documents, to test plans, to discussions about beer brewing techniques, our products are used internally. A company’s choice of office tools can have a big impact on productivity, and it is fortunate for Google that we have such a comprehensive suite. The tools have a consistently simple UI (no manual required), perform very well, encourage collaboration, and auto-save in the cloud. Now that I am used to these tools, I would certainly have a hard time going back to the tools of previous companies where I have worked. I’m sure I would forget to click the save buttons for years to come.

Examples of tools frequently used by engineers:
  • Google Drive Apps (Docs, Sheets, Slides, etc.) are used for design documents, test plans, project data, data analysis, presentations, and more.
  • Gmail and Hangouts are used for email and chat.
  • Google Calendar is used to schedule all meetings, reserve conference rooms, and set up video conferencing using Hangouts.
  • Google Maps is used to map office floors.
  • Google Groups are used for email lists.
  • Google Sites are used to host team pages, engineering docs, and more.
  • Google App Engine hosts many corporate, development, and test apps.
  • Chrome is our primary browser on all platforms.
  • Google+ is used for organizing internal communities on topics such as food or C++, and for socializing.

Thoughts?

We are interested to hear your thoughts on this topic. Do you dogfood your company’s products? Do your office tools help or hinder your productivity? What office software and tools do you find invaluable for your job? Could you use Google Docs/Sheets for large test plans?

(Continue to part 3)
Categories: Blogs

The Google Test and Development Environment - Pt. 3: Code, Build, and Test

Google Testing Blog - Thu, 03/27/2014 - 23:40
by Anthony Vallone

This is the third in a series of articles about our work environment. See the first and second.

I will never forget the awe I felt when running my first load test on my first project at Google. At previous companies where I’ve worked, running a substantial load test took quite a bit of resource planning and preparation. At Google, I wrote less than 100 lines of code and was simulating tens of thousands of users after just minutes of prep work. The ease with which I was able to accomplish this is due to the impressive coding, building, and testing tools available at Google. In this article, I will discuss these tools and how they affect our test and development process.

Coding and building

The tools and process for coding and building make it very easy to change production and test code. Even though we are a large company, we have managed to remain nimble. In a matter of minutes or hours, you can edit, test, review, and submit code to head. We have achieved this without sacrificing code quality by heavily investing in tools, testing, and infrastructure, and by prioritizing code reviews.

Most production and test code is in a single, company-wide source control repository (open source projects like Chromium and Android have their own). There is a great deal of code sharing in the codebase, and this provides an incredible suite of code to build on. Most code is also in a single branch, so the majority of development is done at head. All code is also navigable, searchable, and editable from the browser. You’ll find code in numerous languages, but Java, C++, Python, Go, and JavaScript are the most common.

Have a strong preference for a particular editor? Engineers are free to choose from many IDEs and editors. The most common are Eclipse, Emacs, Vim, and IntelliJ, but many others are used as well. Engineers who are passionate about their preferred editors have built up and shared some truly impressive editor plugins and tooling over the years.

Code reviews for all submissions are enforced via source control tooling. This also applies to test code, as our test code is held to the same standards as production code. The reviews are done via web-based code review tools that even include automatically generated test results. The process is very streamlined and efficient. Engineers can change and submit code in any part of the repository, but it must get reviewed by owners of the code being changed. This is great, because you can easily change code that your team depends on, rather than merely request a change to code you do not own.

The Google build system is used for building most code, and it is designed to work across many languages and platforms. It is remarkably simple to define and build targets. You won’t be needing that old Makefile book.

Running jobs and tests

We have some pretty amazing machine and job management tools at Google. There is a generally available pool of machines in many data centers around the globe. The job management service makes it very easy to start jobs on arbitrary machines in any of these data centers. Failing machines are automatically removed from the pool, so tests rarely fail due to machine issues. With a little effort, you can also set up monitoring and pager alerting for your important jobs.

From any machine you can spin up a massive number of tests and run them in parallel across many machines in the pool, via a single command. Each of these tests is run in a standard, isolated environment, so we rarely run into the “it works on my machine!” issue.

Before code is submitted, you can run presubmit tests that find every test depending transitively on the change and run them all. You can also define presubmit rules that run checks on a code change and verify that tests were run before allowing submission.

Once you’ve submitted test code, the build and test system automatically registers the test and starts building/testing continuously. If the test starts failing, your team will get notification emails. You can also visit a test dashboard for your team and get details about test runs and test data. Monitoring the build/test status is made even easier with our build orbs, designed and built by Googlers. These small devices glow red if the build starts failing. Many teams have had fun customizing these orbs into various shapes, including a Statue of Liberty with a glowing torch.

Statue of LORBerty
Running larger integration and end-to-end tests takes a little more work, but we have some excellent tools to help with these tests as well: Integration test runners, hermetic environment creation, virtual machine service, web test frameworks, etc.

The impact

So how do these tools actually affect our productivity? For starters, the code is easy to find, edit, review, and submit. Engineers are free to choose tools that make them most productive. Before and after submission, running small tests is trivial, and running large tests is relatively easy. Since tests are easy to create and run, it’s fairly simple to maintain a green build, which most teams do most of the time. This allows us to spend more time on real problems and less on the things that shouldn’t even be problems. It allows us to focus on creating rigorous tests. It dramatically accelerates the development process that can prototype Gmail in a day and code/test/release service features on a daily schedule. And, of course, it lets us focus on the fun stuff.

Thoughts?

We are interested to hear your thoughts on this topic. Google has the resources to build tools like this, but would small or medium-sized companies benefit from a similar investment in their infrastructure? Did Google create the infrastructure or did the infrastructure create Google?

Categories: Blogs

Minimizing Unreproducible Bugs

Google Testing Blog - Thu, 03/27/2014 - 23:39


by Anthony Vallone

Unreproducible bugs are the bane of my existence. Far too often, I find a bug, report it, and hear back that it’s not a bug because it can’t be reproduced. Of course, the bug is still there, waiting to prey on its next victim. These types of bugs can be very expensive due to increased investigation time and overall lifetime. They can also have a damaging effect on product perception when users reporting these bugs are effectively ignored. We should be doing more to prevent them. In this article, I’ll go over some obvious, and maybe not so obvious, development/testing guidelines that can reduce the likelihood of these bugs occurring.


Avoid and test for race conditions, deadlocks, timing issues, memory corruption, uninitialized memory access, memory leaks, and resource issues

I am lumping together many bug types in this section, but they are all related somewhat by how we test for them and how disproportionately hard they are to reproduce and debug. The root cause and effect can be separated by milliseconds or hours, and stack traces might be nonexistent or misleading. A system may fail in strange ways when exposed to unusual traffic spikes or insufficient resources. Race conditions and deadlocks may only be discovered during unique traffic patterns or resource configurations. Timing issues may only be noticed when many components are integrated and their performance parameters and failure/retry/timeout delays create a chaotic system. Memory corruption or uninitialized memory access may go unnoticed for a large percentage of calls but become fatal for rare states. Memory leaks may be negligible unless the system is exposed to load for an extended period of time.

Guidelines for development:

  • Simplify your synchronization logic. If it’s too hard to understand, it will be difficult to reproduce and debug complex concurrency problems.
  • Always obtain locks in the same order. This is a tried-and-true guideline to avoid deadlocks, but I still see code that breaks it periodically. Define an order for obtaining multiple locks and never change that order (see the sketch after this list).
  • Don’t optimize by creating many fine-grained locks, unless you have verified that they are needed. Extra locks increase concurrency complexity.
  • Avoid shared memory, unless you truly need it. Shared memory access is very easy to get wrong, and the bugs may be quite difficult to reproduce.
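
As a minimal sketch of the lock-ordering guideline above (the account-transfer scenario is invented for illustration), locks are always acquired in increasing id order, so two transfers running in opposite directions cannot deadlock:

#include <mutex>

// Invented example: each account is guarded by its own lock, and transfers
// always lock the account with the lower id first.
struct Account {
  int id;
  long balance;
  std::mutex mu;
};

void Transfer(Account& from, Account& to, long amount) {
  Account& first = (from.id < to.id) ? from : to;
  Account& second = (from.id < to.id) ? to : from;
  std::lock_guard<std::mutex> lock_first(first.mu);    // Lower id locked first...
  std::lock_guard<std::mutex> lock_second(second.mu);  // ...higher id second, always.
  from.balance -= amount;
  to.balance += amount;
}

int main() {
  Account a{1, 100};
  Account b{2, 100};
  Transfer(a, b, 25);  // Locks a then b.
  Transfer(b, a, 10);  // Also locks a then b, so the two calls cannot deadlock.
  return 0;
}

In C++17 and later, std::scoped_lock can acquire several mutexes with built-in deadlock avoidance, which removes the manual ordering entirely.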

Guidelines for testing:

  • Stress test your system regularly. You don't want to be surprised by unexpected failures when your system is under heavy load.
  • Test timeouts. Create tests that mock/fake dependencies to test timeout code. If your timeout code does something bad, it may cause a bug that only occurs under certain system conditions.
  • Test with debug and optimized builds. You may find that a well-behaved debug build works fine, but the system fails in strange ways once optimized.
  • Test under constrained resources. Try reducing the number of data centers, machines, processes, threads, available disk space, or available memory. Also try simulating reduced network bandwidth.
  • Test for longevity. Some bugs require a long period of time to reveal themselves. For example, persistent data may become corrupt over time.
  • Use dynamic analysis tools like memory debuggers, ASan, TSan, and MSan regularly. They can help identify many categories of unreproducible memory/threading issues.


Enforce preconditions

I’ve seen many well-meaning functions with a high tolerance for bad input. For example, consider this function:

void ScheduleEvent(int timeDurationMilliseconds) {
  if (timeDurationMilliseconds <= 0) {
    timeDurationMilliseconds = 1;
  }
  ...
}

This function is trying to help the calling code by adjusting the input to an acceptable value, but it may be doing damage by masking a bug. The calling code may be experiencing any number of problems described in this article, and passing garbage to this function will always work fine. The more functions that are written with this level of tolerance, the harder it is to trace back to the root cause, and the more likely it becomes that the end user will see garbage. Enforcing preconditions, for instance by using asserts, may actually cause a higher number of failures for new systems, but as systems mature, and many minor/major problems are identified early on, these checks can help improve long-term reliability.

Guidelines for development:

  • Enforce preconditions in your functions unless you have a good reason not to.
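
As an illustration, here is the ScheduleEvent example again with the precondition enforced by an assert rather than silently repaired; an invalid duration now fails loudly at the call site that passed the garbage instead of masking the upstream bug.

#include <cassert>

// Precondition-enforcing variant of the earlier example.
void ScheduleEvent(int timeDurationMilliseconds) {
  assert(timeDurationMilliseconds > 0 && "duration must be positive");
  // ...
}

int main() {
  ScheduleEvent(100);    // Fine.
  // ScheduleEvent(-5);  // Would abort a debug build, exposing the caller's bug.
  return 0;
}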


Use defensive programming

Defensive programming is another tried-and-true technique that is great at minimizing unreproducible bugs. If your code calls a dependency to do something, and that dependency quietly fails or returns garbage, how does your code handle it? You could test for situations like this via mocking or faking, but it’s even better to have your production code do sanity checking on its dependencies. For example:

double GetMonthlyLoanPayment() {
  double rate = GetTodaysInterestRateFromExternalSystem();
  if (rate < 0.001 || rate > 0.5) {
    throw BadInterestRate(rate);
  }
  ...
}

Guidelines for development:

  • When possible, use defensive programming to verify the work of your dependencies with known risks of failure like user-provided data, I/O operations, and RPC calls.

Guidelines for testing:

  • Use fuzz testing to test your system’s hardiness against bad data.


Don’t hide all errors from the user

There has been a trend in recent years toward hiding failures from users at all costs. In many cases, it makes perfect sense, but in some, we have gone overboard. Code that is very quiet and permissive during minor failures will allow an uninformed user to continue working in a failed state. The software may ultimately reach a fatal tipping point, and all the error conditions that led to failure have been ignored. If the user doesn’t know about the prior errors, they will not be able to report them, and you may not be able to reproduce them.

Guidelines for development:

  • Only hide errors from the user when you are certain that there is no impact to system state or the user.
  • Any error with impact to the user should be reported to the user with instructions for how to proceed. The information shown to the user, combined with data available to an engineer, should be enough to determine what went wrong.


Test error handling

The most common section of code to remain untested is error handling code. Don’t skip test coverage here. Bad error handling code can cause unreproducible bugs and creates great risk if it does not handle fatal errors well.

Guidelines for testing:

  • Always test your error handling code. This is usually best accomplished by mocking or faking the component triggering the error.
  • It’s also a good practice to examine your log quality for all types of error handling.


Check for duplicate keys

If unique identifiers or data access keys are generated using random data or are not guaranteed to be globally unique, duplicate keys may cause data corruption or concurrency issues. Key duplication bugs are very difficult to reproduce.

Guidelines for development:

  • Try to guarantee uniqueness of all keys.
  • When not possible to guarantee unique keys, check if the recently generated key is already in use before using it.
  • Watch out for potential race conditions here and avoid them with synchronization, as sketched below.
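
A minimal sketch of the check-before-use approach (the key generator and class are invented for illustration): the set of issued keys is consulted and updated under a single lock, so two threads cannot hand out the same key concurrently.

#include <mutex>
#include <random>
#include <string>
#include <unordered_set>

// Invented key issuer: keys are random, so uniqueness is not guaranteed by
// construction and must be checked before use.
class KeyIssuer {
 public:
  std::string NextKey() {
    std::lock_guard<std::mutex> lock(mu_);  // Avoid a check-then-use race.
    while (true) {
      std::string key = RandomKey();
      if (issued_.insert(key).second) return key;  // Not seen before: use it.
      // Otherwise the key was a duplicate; generate another.
    }
  }

 private:
  std::string RandomKey() {
    static const char kAlphabet[] = "0123456789abcdef";
    std::uniform_int_distribution<int> dist(0, 15);
    std::string key;
    for (int i = 0; i < 8; ++i) key += kAlphabet[dist(gen_)];
    return key;
  }

  std::mutex mu_;
  std::mt19937 gen_{std::random_device{}()};
  std::unordered_set<std::string> issued_;
};

int main() {
  KeyIssuer issuer;
  std::string a = issuer.NextKey();
  std::string b = issuer.NextKey();  // Guaranteed to differ from a.
  return 0;
}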


Test for concurrent data access

Some bugs only reveal themselves when multiple clients are reading/writing the same data. Your stress tests might be covering cases like these, but if they are not, you should have special tests for concurrent data access. Cases like these are often unreproducible. For example, a user may have two instances of your app running against the same account, and they may not realize this when reporting a bug.

Guidelines for testing:

  • Always test for concurrent data access if it’s a feature of the system. Actually, even if it’s not a feature, verify that the system rejects it. Testing concurrency can be challenging. An approach that usually works for me is to create many worker threads that simultaneously attempt access and a master thread that monitors and verifies that some number of attempts were indeed concurrent, that they were blocked or allowed as expected, and that all were successful (see the sketch below). Programmatic post-analysis of all attempts and of the changing system state may also be necessary to ensure that the system behaved well.
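
A stripped-down sketch of this worker/master pattern (the shared counter below is an invented stand-in for whatever data your system actually guards): worker threads hammer the same data concurrently, and the main thread verifies afterwards that every attempt was accounted for.

#include <atomic>
#include <cassert>
#include <thread>
#include <vector>

// Invented stand-in for the system under test: a counter that is expected
// to tolerate concurrent increments without losing updates.
std::atomic<int> g_counter{0};

void IncrementCounter() { g_counter.fetch_add(1); }

int main() {
  const int kWorkers = 16;
  const int kAttemptsPerWorker = 1000;

  // Worker threads all attempt access at the same time.
  std::vector<std::thread> workers;
  for (int i = 0; i < kWorkers; ++i) {
    workers.emplace_back([&] {
      for (int j = 0; j < kAttemptsPerWorker; ++j) IncrementCounter();
    });
  }
  for (std::thread& w : workers) w.join();

  // The "master" check: a lost update would show up as a smaller final value.
  assert(g_counter.load() == kWorkers * kAttemptsPerWorker);
  return 0;
}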


Steer clear of undefined behavior and non-deterministic access to data

Some APIs and basic operations have warnings about undefined behavior when in certain states or provided with certain input. Similarly, some data structures do not guarantee an iteration order (example: Java’s Set). Code that ignores these warnings may work fine most of the time but fail in unusual ways that are hard to reproduce.

Guidelines for development:

  • Understand when the APIs and operations you use might have undefined behavior and prevent those conditions.
  • Do not depend on data structure iteration order unless it is guaranteed. It is a common mistake to depend on the ordering of sets or associative arrays, as the sketch below illustrates.
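
A small illustration in C++ (the same pitfall the article notes for Java’s Set): an unordered container makes no promise about iteration order, so code that relies on that order may pass today and fail unreproducibly after a library update or a different insertion sequence.

#include <iostream>
#include <set>
#include <string>
#include <unordered_set>

int main() {
  // Iteration order of an unordered container is unspecified and can change
  // between runs, library versions, or insertion patterns.
  std::unordered_set<std::string> names = {"charlie", "alpha", "bravo"};
  for (const std::string& name : names) {
    std::cout << name << " ";  // Order here is unspecified.
  }
  std::cout << std::endl;

  // If a stable order is genuinely needed, use a container that guarantees
  // one, or sort before iterating.
  std::set<std::string> sorted(names.begin(), names.end());
  for (const std::string& name : sorted) {
    std::cout << name << " ";  // Always alphabetical.
  }
  std::cout << std::endl;
  return 0;
}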


Log the details for errors or test failures

Issues described in this article can be easier to reproduce and debug when the logs contain enough detail to understand the conditions that led to an error.

Guidelines for development:

  • Follow good logging practices, especially in your error handling code.
  • If logs are stored on a user’s machine, create an easy way for them to provide you the logs.

Guidelines for testing:

  • Save your test logs for potential analysis later.


Anything to add?

Have I missed any important guidelines for minimizing these bugs? What is your favorite hard-to-reproduce bug that you discovered and resolved?

Categories: Blogs

Selenium Automated UX Compliance Testing

Software Testing Magazine - Thu, 03/27/2014 - 22:48
With Selenium and Jenkins, you can extend Selenium processes to include screenshot comparisons, enabling automated UX compliance checks at the speed of your Continuous Integration workflow. Learn how to compare screenshots from one Jenkins run to a repository of known-quality screenshots, in order to ensure that your website not only works well, it also looks right. Video producer: http://www.seleniumconf.org/
Categories: Communities

iOS 7.1 Now Available on Sauce

Sauce Labs - Thu, 03/27/2014 - 19:22

We’re back with more iOS platforms! We’ve just added support for iOS 7.1 on Sauce for both Appium and WebDriver tests. Check out the platforms page to grab the desired capabilities code, or just add iOS 7.1 to your existing capabilities. Test away!

Categories: Companies

STAREAST, Orlando, Florida, May 4-9, 2014

Software Testing Magazine - Thu, 03/27/2014 - 19:19
STAREAST is a conference for software testers and quality assurance professionals. It presents up-to-date information, tools, and technologies available in the software testing domain today. You will be able to attend conference presentations and half- or full-day tutorials. In the agenda you can find topics like “The Challenges of BIG Testing: Automation, Virtualization, Outsourcing, and More”, “Measurement and Metrics for Test Managers”, “Testing the Data Warehouse – Big Data, Big Problems”, “Testing with Limited, Vague, and Missing Requirements”, “Root Cause Analysis for Software Testers”, “Principles Before Practices: Transform Your Testing ...
Categories: Communities

Android-Windows “Combo-Phone” Dead

uTest - Thu, 03/27/2014 - 16:43

Huawei, one of the world’s largest smartphone vendors, revealed plans to launch an Android-Windows dual-OS mobile device at the Mobile World Congress in Barcelona last month. But now there’s a small change of plans: They’re not doing it anymore.

“Most of our products are based on Android OS, [and] at this stage there are no plans to launch a dual-OS smartphone in the near future,” Huawei said in a statement to FireWireless. However, they will continue to support Android and Windows phones separately.

This comes as a blow to consumers looking forward to running both Android and Windows operating systems on their phone, but mostly to Microsoft, whose Windows Phone usage lags behind Google’s Android OS and Apple’s iOS.

Partnering with Android was supposed to expose Windows Phone capabilities to a broader audience. There was a better chance of consumers buying the phone if Android was on it too. But there is apparently not enough incentive for Google to allow Microsoft’s OS to coexist on the same device. It seems as if the vision for this phone was all too narrow. Dual operating systems might seem twice as cool, but they would also have made a software tester’s scope of operations twice as complicated.

The first list below outlines the benefits of having a dual-OS device. The second outlines why this system may not have been so beneficial after all.

The advantages:

  1. The ability to organize your life between two OSes. Business by day, personal by night, perhaps?
  2. Compatibility. The customer can use software/hardware even if it’s only supported on one OS.
  3. Share programs between the two OSes. Only those which are compatible with both, but yes, you can share.
  4. Share your Hard Disk Drive. Although you can use two, you can save everything to one HDD if you wish.
  5. Impress your friends because instead of one OS, you have two.

The disadvantages:

  1. Testing and developing for one mobile operating system is hard enough. Imagine testing a phone that has two! Think of the checklist this device will have to pass before entering the market: Performance, compatibility, security, seamless user experience, processor speed, battery life, boot time, to name a few.
  2. New unforeseen security issues. As technology advances, testing will always become more complicated. Ensuring that two operating systems are secure will also be twice as complicated.
  3. Sharing programs between the two OSes. As mentioned before, only those that are compatible with both operating systems can be shared. It’s a bit frustrating when you can’t run a program, yet you can click on it in the directory.
  4. Share your Hard Disk Drive. Essentially, you are cutting the hard drive space in half. Partitioning one HDD into two partitions is time-consuming, and once the partitions are created, it is quite complicated to increase the size of one of them if necessary. Usually, it’s recommended to just install two hard drives anyway.
  5. User friendly? For the not-so-tech-savvy, this phone is not so friendly.

The incident with Huawei is not an isolated one. This past year, Asus also faced pressure from Google and Microsoft to “indefinitely postpone” plans to sell their tablet equipped with Android and Windows OS.

It seems as if the dual-OS concept is being dismantled altogether, especially when it comes to Windows and Android.

What other testing challenges would you have expected from the dual-OS phone? Do you wish it was still being launched? Tell us why in the comment section below.

Categories: Companies

Continuous Performance Testing in the Cloud (French)

Continuous Delivery is one of today’s hot topics. Being able to deliver an application continuously means that the entire delivery process has been automated and that developers and operations staff can focus on higher-value tasks.

Come and learn how to set up continuous performance testing and a Continuous Delivery pipeline with the open source technologies Jenkins and JMeter, using the CloudBees platform together with the BlazeMeter and NewRelic services.

The presentation will use the Spring PetClinic application as its running example.

Date: Monday, March 31, 2014, 7:30 PM
Duration: 1.5 hours
Location: Soat - 104 bis, rue de Reuilly (see the map)
Domain: Java, Continuous Delivery, Cloud
Level: Intermediate
Speakers: Cyrille Le Clerc and Yohan Beschi
Free event
Details and registration here.

About the speakers

Cyrille Le Clerc - A Solutions Architect at CloudBees after 14 years in services and consulting, Cyrille Le Clerc is passionate about the Cloud, DevOps culture, and Continuous Delivery.
By night, Cyrille is a committer on the embedded-jmxtrans project.

Yohan Beschi - A passionate developer, Yohan Beschi has worked almost exclusively with Java since 2002. He recently took up Dart development, which he has been evangelizing ever since.
He is part of the Soat expertise team, drawing on his experience with web and Java projects.
To follow his contributions, click here.
Categories: Companies

Love or Hate Flash; Here’s How to Use Web Server Content Compression Properly

Are you serving .SWF files from your web server and getting complaints from your end users that your Flash app is “just slow”? Or has your Ops team wondered why you see such high web request response times for some of the web service calls executed by your Flash client? I was just working with […]
Categories: Companies

With growing mobile health application adoption, some software security concerns remain

Kloctalk - Klocwork - Thu, 03/27/2014 - 14:30

Mobile applications are capturing the healthcare industry's attention as one of the most exciting frontiers for new progress in patient care. Yet while mobile health app adoption is expected to increase dramatically in the next few years – industry estimates project that  500 million people will be using mobile health apps by 2015 – there are significant software security concerns. Experts note that federal regulations do not currently govern many aspects of the mobile health sphere. As a result, developers may need to apply internal pressure to ensure security standards are met as they seek broader buy-in for their technologies.

A series of papers published in the journal Health Affairs recently drew attention to some of the privacy and security challenges facing mobile health applications. Broadly speaking, these apps introduce risk in that they transmit sensitive information over a network. Communications via telehealth services are not regulated by the Health Insurance Portability and Accountability Act, researchers noted.

"Although we are unaware of direct harm to patients associated with a security flaw in a telehealth system, there have been academic demonstrations of such problems," researchers Joseph Hall and Deven McGraw wrote in one Health Affairs paper.

Legal and regulatory tangles
One of the major issues is that consumers who use mobile apps in a non-healthcare setting are not covered entities under HIPAA regulations, MedPage Today's David Pittman explained, summarizing another Health Affairs paper. As a result, depending on the circumstances, a mobile health app vendor wouldn't necessarily have to disclose a security breach under current HITECH Act regulations. He noted that consumer device software may contain security flaws and could be a tempting target for hackers, making this regulatory hole a notable problem.

Additionally, patient information handed over to private health apps may not be subject to the protections of the Computer Fraud and Abuse Act of 1986 or the Electronic Communications Privacy Act of 1986, which are designed to prohibit the unauthorized interception of digital information, another Health Affairs article, written by Tony Yang and Ross D. Silverman, noted. Moreover, liability related to malpractice in scenarios involving mobile health app data is unclear.

"[T]here is no agreement as to what a doctor's liability would be if he or she injured a patient as the result of faulty or inaccurate information supplied by the patient," Yang wrote.

While researchers urge expanded regulation and greater legal clarity surrounding software security and privacy for mobile health apps, developers may want to be proactive in their implementation of better security practices. Mobile health tools are facilitating home and remote medical care across a broader swath of people, but buy-in could remain a challenge if the space is seen as unsafe or poorly regulated. While legal discourse is just beginning to emerge in this area, on a software development level the mandate is already clear: using tools like static analysis software as part of a secure development lifecycle can help avoid the risks endemic to mobile and telehealth applications. Furthermore, such tools can guide developers through FDA standards and compliance as more of a legal framework is introduced.

Software news brought to you by Klocwork Inc., dedicated to helping software developers create better code with every keystroke.

Categories: Companies

Pipeline-Style Deployments Using IBM UrbanCode Deploy

IBM UrbanCode - Release And Deploy - Thu, 03/27/2014 - 12:14

In recent times, a number of organizations have realized the need to deploy their products and services continuously and actively into production. The advent of a social-mobile lifestyle, coupled with market pressure to be on the latest and greatest versions of software, is fueling such a developmental and operational mind shift. There is a tremendous amount of pressure on software development and IT functions in organizations to push all recent updates, both quickly and accurately, to their systems of engagement and their systems of record. As a result, they are making huge investments in adopting DevOps best practices around continuous integration, continuous deployment, and continuous test, and are developing what are called delivery pipelines that help them validate changes at multiple, parallel stages of their product release cycles. With this blog post, I would like to showcase one such pipeline, created by bringing multiple IBM UrbanCode Deploy environments together.

Background

After spending a number of years at IBM Rational, I decided to move to IBM Cloud Services during the early part of this year. In my most recent incarnation at Rational, I was an Automation Architect and developed a continuous deployment pipeline for Jazz Collaborative Lifecycle Management (CLM) offering. Using this pipeline, we could deploy hot-off-the-stream CLM product builds into 8 different, multi-node environments. Once deployed, we were able to run a series of automated tests against each of these environments and were able to publish their results/statuses into a common work-item based asset for upstream reporting. You can read more about this effort in the following blog series on Jazz.net:

  1. Improving throughput in the deployment pipeline
  2. Behind the scenes of the CLM Continuous Deployment pipeline
  3. CLM Continuous Deployment Pipeline: Reporting on the state of affairs

Work on the CLM pipeline started in the summer of 2012, and the hottest provisioning and orchestration technology at that point was IBM SmartCloud Continuous Delivery (SCD). SCD used a Domain Specific Language (DSL) called Weaver from IBM Research for describing the application being provisioned on one end and the physical infrastructure pieces it would be provisioned to on the other end. To marry the two sides of this world, it used an environment definition that mapped the logical application structure to its physical infrastructure components. Deployment of such an environment was performed using a Rational Team Concert-based build definition that used a command-line-based mechanism for compiling, packaging, deploying, and testing the latest version of an application on pre-determined pieces of infrastructure. The build contained a pointer to the environment definition that was used by the build script at runtime for assembling knowledge of the application-under-test (CLM) and the infrastructure (Cloud) it needed to be provisioned onto.

Using this mechanism, we were able to build multiple, singular deployable units of the application-under-test in specific environments through clever use of RTC build properties, and we were able to weave (no puns intended) each one of them into a pipeline-style execution model. Overall, life was good until IBM UrbanCode Deploy (UCD) came along and spoiled everything. Things that took months to integrate with nitty-gritty Weaver code could now be done in just a few clicks using plugin steps and component/application processes from IBM UCD (how boring :-)).

And then my move to IBM Cloud Services happened.

When life came a full circle

Over at IBM Cloud Services, I was assigned a similar objective: Develop a Continuous Deployment pipeline for delivering IBM Cloud Managed Services using DevOps-style Automation. However, this time around, I decided to give a closer look to what IBM UCD had to offer. I chalked out a 6-week Proof-of-Concept (PoC) involving IBM Tivoli Enterprise Monitoring (ITM v6.2.3 FP3) as the application to be provisioned in a 4-machine configuration. For those unfamiliar with ITM, it’s an agent-based solution that allows system administrators to proactively manage the health and availability of their IT infrastructure in an end to end process.

For this PoC, I wanted to create multiple test/production environments containing all ITM components integrated with each other, and offered as a complete monitoring solution. I wanted each of these environments to be provisioned automatically using the same version of the automation code. It is worth noting that in a real world, automatic promotions from one environment to another will typically be gated by a process involving environment-specific tests, followed by verification of test results and approval of the changes by all stakeholders. However, for the purpose of this PoC,  I just wanted to simulate a stripped down DevOps use case where the process would start with an automation developer making changes to some chef cookbook code that installs and configures all ITM components, delivering those changes to a source control stream to be automatically built by RTC Jazz-based build definitions, provisioning a staging environment using IBM UCD and eventually following it by deployment to pre-production and production environments; the latter again using IBM UCD. Given that we had developed this exact use case for Jazz CLM pipeline, the primary focus here was not so much on what release process was followed. Rather, it was on determining how good (or bad) IBM UCD was as an orchestration tool for multi-stage pipeline development.

I started by installing the UCD 6.0.1.2 server, UCD agents, the RTC 4.0.6 client/server, Jazz Build Engines, etc. (the whole nine yards) on a set of virtual machines. Then I fired up my browser and pointed it at the IBM UCD server to create components, an application, and my 3 environments – Staging, Pre-Production, and Production. I defined processes for each of the components and for the IBM UCD application for ITM. I finally created a small build definition in RTC that could package up my cookbooks into a single zip file and push it into IBM UCD’s CodeStation repository. Each new version of my “Install Cookbooks” component kicked off deployment of the ITM application into my Staging environment. Everything worked for individual environments. However, what I eventually wanted was a staged deployment of the ITM application into all environments.

Genesis of an IBM UCD Pipeline

To my surprise there was no out-of-the-box setting to chain multiple application environments together in IBM UCD. However, there was a cool trick that was available as part of the standard IBM UCD plugins. It was called “Request Application Process” and was cataloged under IBM UrbanCode Deploy->Applications plugin steps in the component process editor. Using this plugin step, it was possible to place a request to provision the application in a pre-defined environment. Additionally, this plugin step could be configured to select specific (or latest) versions of the components the application needed in the said environment. In my case, I chose to install the “latest” version of my “Install Cookbooks” component. For the application reference, I used the application name property (${p:application.name}). Another neat trick that IBM UCD allowed was around providing the environment reference. I set a property called NEXT_ENV in each environment and its value was the name of the next environment in the pipeline. So the NEXT_ENV in my Staging environment contained the value “Pre-Production”. Likewise, the same property in Pre-Production environment contained a value of “Production”. The property value for the Production environment was an empty string. To allow for the last environment in the chain to not fail while trying to request deployment of the application in a non-existing environment (due to the empty environment property value for NEXT_ENV), I added a conditional step called “Check if Environment Exists” (cataloged under IBM UrbanCode Deploy->Environment) to check if the environment pointed to by the property existed or not. Overall, here is what the process flow looked like:


Figure 1: Request Next Deployment

The configuration for each of these steps looked like this:


Figure 2: Check if Environment Exists


Figure 3: Invoke Next Stage

With this in place, all I had to do was drag the component process onto the application’s process as the final execution step.


Figure 4: Request Next Environment

When all the steps of provisioning a given environment were successful, the application process kicked off the deployment of the next stage of the pipeline and kept going until there were no more environments to be provisioned. What’s more, each invocation of the deployment in an environment kept a breadcrumb around for the next environment in the chain. This was captured as a “Details” link in the log of the application process step that requested the next environment. Clicking on the link took the user to the next environment that was provisioned automatically.


Figure 5:  Traceability To Subsequent Environment Deployment

Conclusion

I would like to hear if you’ve implemented similar multi-stage deployment pipelines using IBM UrbanCode Deploy. If yes, please share your ideas in the comments section below. Please also share if you find the example described in this blog useful or have suggestions for improvement. Thank you.
Categories: Companies

Ranorex at Swiss Testing Day 2014 in Zurich

Ranorex - Thu, 03/27/2014 - 11:05
We are just back from successfully participating in "Swiss Testing Day", a software testing event which took place in the Zurich "Kongresshaus" on March 19th, 2014. It was the biggest European software conference ever organized by Testers for Testers.



More than 700 software testing enthusiasts visited this year's Swiss Testing Day. On offer were various lecture tracks and a number of exhibits that focused on core areas such as test management, test methods, test automation and consulting. All in all there were 18 lectures and 2 top-notch keynote speeches that covered topics like "Innovation", "Test Methods", "Practices", "Technical Background" and "Compliance".



We are very pleased to say that several conference attendees used the breaks between the lectures to visit our booth. There we provided testing professionals with an overview of the Ranorex tools, addressed their test automation concerns, and discussed new ideas about the future of automated testing.



We would like to thank all our visitors for their high level of interest. We are excited about the many new cooperation opportunities and, of course, also look forward to continued close cooperation with existing customers and partners.



We are looking forward to welcoming you again at Swiss Testing Day 2015.



You can also meet Ranorex professionals at these upcoming events:
Categories: Companies

The Odd in Ken Dodd

Hiccupps - James Thomas - Thu, 03/27/2014 - 07:55
I'll leave it as an exercise in creative thinking to come up with reasons I might have been buying a Ken Dodd triple album:
That aside, is there a problem here?

The Guardian (amongst others) has been enjoying in-store pricing oddities for a while and, on the face of it, there's something not quite right about this Amazon item either.

But would you simply chuckle and shout bug?

Let's make it an exercise in creative thinking to suggest scenarios where it's reasonable for it to be cheaper to buy the CD and the MP3s than to buy the MP3s alone. Stick 'em in the comments if you like.
Image: https://flic.kr/p/9Xzstx
Categories: Blogs

Community Update 2014-03-26 – Google #oauth #openid endpoints, #sublime, #tdd with Google Spreadsheet, and more

Decaying Code - Maxime Rouiller - Thu, 03/27/2014 - 04:30

So first of all, Google is retiring some of its OAuth/OpenID endpoints. You NEED to read the first article. It will let you know what is going obsolete and when.

Then we have a video worth watching. Doing TDD with Google Spreadsheet. It’s crazy, it’s insane and obviously… I love it!

Besides that, enjoy the read!

Web Development

Migrating to Google+ Sign-In - Google+ Platform — Google Developers (developers.google.com) – Google is shutting down some OAuth/OpenID endpoints. You should update your apps. Some as soon as April 30th.

Sublime is Sublime 10 | Greg Young's Blog on WordPress.com (goodenoughsoftware.net)

Minified.js – A Tiny Alternative To jQuery (www.webresourcesdepot.com)

WebAIM: Accessibility Lipstick on a Usability Pig (webaim.org)

5 Truly Effective CSS Boilerplates and Frameworks (blog.smartbear.com)

ASP.NET

Avoiding problems with relative and absolute URLs in ASP.NET - Fabrice's weblog (weblogs.asp.net)

ASP.NET Web Optimization Framework - CodeProject (www.codeproject.com)

Architecture and Methodology

Using Google Spreadsheet as a simple TDD/BDD environment on Vimeo (vimeo.com) – this is a must watch.

2 Lessons Learned, And 3 Resources For For Learning RabbitMQ On NodeJS (derickbailey.com)

Windows Azure

Caching on Windows Azure - Azure AppFabric Cache, Azure Cache Service, Managed Cache, Dedicated Cache, In-Role Cache, Co-located Cache, Shared Cache, Azure Role-based Cache - Clarifying the naming confusion (blogs.msdn.com)

Storage (SQL, NoSQL, etc.)

Differences in Map/Reduce between RavenDB & MongoDB - Ayende @ Rahien (ayende.com)

Search Engine (Solr, ElasticSearch, etc.)

How we use Elasticsearch to enhance our web products | Browser (www.browserlondon.com)

Elasticsearch.org This Week In Elasticsearch | Blog | Elasticsearch (www.elasticsearch.org)

Categories: Blogs

What the Multi-Screen Shift Means for Media and Entertainment Companies

uTest - Wed, 03/26/2014 - 23:55

Many consumers aren’t watching TV on TV sets anymore – well, at least teens and young twenty-somethings aren’t.

The proliferation of smartphones, tablets and computers has put TV sets to the test. Now, according to Dawn Chmielewski of Recode, studies show that the TV screen may be falling out of favor for some:

“A new study from Deloitte finds that teens and young twentysomethings spend more time watching movies and television shows on their computers, smartphones and tablets than they do on their TV screens.

“The idea that TV is only watched on a TV isn’t true anymore,” said Gerald Belson, vice chairman of the firm’s U.S. media and entertainment practice.

Deloitte surveyed more than 2,000 U.S. consumers about their media consumption habits and technology use as part of its annual Digital Democracy Survey (PDF).

Although viewing habits have been changing as the number of screens in the typical home multiply, this marks the first time these devices have eclipsed TV for any segment of the population, Belson said.

“It’s an indicator of how the market is reacting to the introduction of technologies,” Belson said. “Clearly, a large segment of the population is quite comfortable using any number of devices to watch content. The speed with which it’s happening takes some people by surprise.”

The TV is still king of the castle in most American homes, with Generation X, Baby Boomers and mature viewers saying they spent the majority of their time watching movies and TV shows on the more familiar living room screen. Even older millennials, those aged 25 to 30, say they tune in to the TV more than half of the time.

“The fact that we have some demographics watching television, but not on TV, is significant,” Belson said.

This shift has profound implications for networks and for Nielsen, which are working to find ways to measure TV viewing across multiple screens. Nielsen announced plans to begin incorporating mobile into its traditional ratings with the 2014-15 season.

For media and entertainment companies, the shift puts added pressure on delivering a quality experience across devices. And that is much easier said than done, especially for entertainment brands. Ensuring a quality experience via television is simple compared to knowing how your video content loads across devices, in different locations, and under real-world scenarios. Luckily, in-the-wild testing of your video content will help root out these problems before users stumble upon them.

For more resources on in-the-wild testing, download the free whitepaper here>> 

Categories: Companies

ATDD with Behat and Selenium

Testing TV - Wed, 03/26/2014 - 22:05
This talk explores how Behat can help testers in API testing and browser automation with Selenium+Behat. Behat is an open source Behavior Driven Development (BDD) framework for PHP inspired by the Ruby Cucumber BDD framework. Video producer: London Selenium Meetup
Categories: Blogs
