Feed aggregator

Old Code, New Tests

Testing TV - Mon, 10/06/2014 - 18:17
This presentation explains how Eventbrite has taken a code base that has been around for quite a while and built a culture of testing around it. Eventbrite has been around since 2007. At that time there wasn’t much of a testing culture to speak of, but they began testing their code as they simultaneously adopted […]
Categories: Blogs

Where in the World is Jenkins? - October 2014


CloudBees employees travel the world to attend many interesting events. This month the Bees are busy. If you are attending any of these October events, be sure to connect with us!
  • DevOps & Continuous Delivery Event for IT Executives by Atos – 7th of October
    The quote "You can't succeed in the future with the organization of the past" is the central idea of this seminar for IT executives who wish to improve their delivery efficiency and the alignment between their IT delivery and business objectives. More than ever before, development teams must connect with IT operations in a dynamic, ‘continuous delivery’ environment, using the right toolset in order to deliver faster, higher quality software to their business end-users at a lower cost. The Atos DevOps & Continuous Delivery event will present real customer cases and the tools that allow companies to deliver better quality applications faster. For more information and to register, click here.

  • DevOps Days Chicago - 7th and 8th of October
    The DevOps days are all about bringing development and operations together. There will be many interesting presentations, for instance on why failing to manage an organization's priorities effectively is as damaging to a company's success as not trying to implement a DevOps culture at all. In addition to presentations, this event also uses the open space concept: the simplest meeting format that could possibly work, based on the (un)common sense of what people do naturally in productive meetings. Prepare to be surprised! Click here for registration and more information.

  • CD Summit Chicago – 15th of October
    In this seminar for IT executives and technologists, you will learn how to make evolutionary changes to people, processes and technologies to achieve the benefits of continuous delivery and agile. Come join an impressive group of continuous delivery experts to explore how you can increase quality and much more. You'll see how to reduce errors throughout the pipeline and dramatically improve time-to-market for new features and applications with continuous integration with Jenkins. There will be an executive morning with four speakers and a technical afternoon with four more. For more information and to register, click here.


  • CD Summit San Francisco – 22nd of October
    This Summit in San Francisco will also cover how to make evolutionary changes to people, processes and technologies to achieve the benefits of continuous delivery and agile. The speaker timetable will be largely the same as at the Summit in Chicago, with a few substitutions, and the topics will be the same. To see the timetable, learn more about the speakers and register, click here.


  • Jenkins User Conference - US West (San Francisco) - 23rd of October
    The Jenkins User Conference (JUC) brings Jenkins experts and community enthusiasts together for an invaluable day of Jenkins-focused learning and networking opportunities. Among other things, you will learn about the latest and greatest Jenkins technology, best practices and plugin development. Jenkins CI is the leading open source continuous integration server. Built with Java, it provides over 961 plugins to support building and testing virtually any project. By attending JUC, you join the community of Jenkins technologists dedicated to expanding their skills and moving the Jenkins platform forward. To buy tickets and for more information, click here.

  • IC3: IT Cloud Computing Conference (San Francisco) - 27th and 28th of October 
    "IC3 gives you everything you need to automate IT and DevOps in the cloud". This is in short what the Conference is about. Attendees get vendor-neutral technical content, training and hands-on experience to take the industry (and their careers) to the next level. After you have seen the presentations, you can for instance participate in labs to immediately build what you have learned or network with peers from large enterprise organizations. To register and more information click here.

  • ZendCon 2014 - 27th until 30th of October 
    ZendCon is the place to catch up on news, float new ideas and share coding challenges with developers from around the globe. You can fill your days and evenings with sessions, tutorials, and networking time. There will be three great conference tracks: PHP Best Practices & Tooling, Continuous Delivery & DevOps, and Application Architecture - APIs, Mobile, Cloud Services. For more information about the sessions, tutorials and speakers, and to register, check out the ZendCon 2014 website.


Categories: Companies

BlazeMeter and Sauce Labs Combine Functional and Load Testing

Software Testing Magazine - Mon, 10/06/2014 - 17:09
BlazeMeter, the leading self-service load and performance testing platform for mobile, web and APIs, has announced a new multi-phase technology partnership with Sauce Labs. This technology partnership will enable developers to seamlessly combine functional and performance testing with the BlazeMeter and Sauce Labs platforms, streamlining the testing process. The announcement cements an already solid relationship between BlazeMeter and Sauce Labs. “BlazeMeter and Sauce Labs are the best-of-breed open source testing vendors,” said Alon Girmonsky, Founder and CEO of BlazeMeter. “JMeter and Selenium are the best open source functional and ...
Categories: Communities

Businesses that ignore open source are missing opportunities

Kloctalk - Klocwork - Mon, 10/06/2014 - 15:00

As open source software moves further into the mainstream, more individuals and businesses are beginning to take note of the potential benefits this approach offers. Companies increasingly realize that open source can deliver superior performance at lower cost, regardless of the organization's size or industry.

For those businesses that continue to ignore open source, the consequences may be significant. As IT Pro Portal reported, a recent survey found that IT professionals widely believed organizations that disregard open source will likely miss out on many opportunities, hurting their ability to compete against more technologically savvy rivals.

Open source opportunities
The survey, conducted by CWJobs, included insight from 300 IT professionals. Among these respondents, 62 percent argued that firms not using open source tools right now are already missing out on business opportunities.

In terms of the specific benefits offered by open source software, 45 percent of these IT experts pointed to the technology's superior flexibility, making this the most popular advantage. Cost savings were next, cited by one-third of participants, the news source reported.

"Businesses must wake up to the benefits of open source and ensure they have the right expertise in place to help realize its full potential," explained Mike Black, sales director at CWJobs, IT Pro Portal reported.

The future of open source
Furthermore, the importance of embracing open source software is growing. The survey found that more than 70 percent of participating IT professionals believe open source will see greater use in the future, the news source noted.

"Open source is being used everywhere; it underpins the Internet, and is arguably the most important enabler of the use of big data in businesses," said Mark Taylor, U.K. director at the Open Source Software Institute, IT Pro Portal reported. "Its importance will continue to grow."

Further highlighting this trend, the source reported that nearly half of the IT professionals who participated in this survey believed there are more open source-specific jobs available today than there were just one year ago. Black noted that a similar percentage of respondents indicated they would take a job with an organization specifically because that company has a strong reputation in regard to open source or demonstrates a willingness to train staff to better use open source.

This suggests that open source's benefits extend beyond the obvious. By embracing open source, firms can also position themselves to attract the most desired, up-and-coming IT talent available. Considering how important IT talent is for achieving success in just about any industry, this makes a willingness to embrace open source an invaluable tool for gaining a competitive edge. On the flip side, failing to deploy and support open source tools may be seen as a deal-breaker for otherwise ideal job candidates.

Making open source work
The growing need to integrate open source solutions into a given business's operations emphasizes the importance of the right set of open source support tools. Without critical supplemental solutions, open source software strategies are unlikely to yield optimal results.

This is perhaps most important in regard to security. As the CWJobs survey found, security concerns were seen by participating IT professionals as the single biggest obstacle to open source adoption, cited by 40 percent of respondents. Businesses require safe open source solutions in order to confidently embrace these tools.

Fortunately, open source security solutions exist that can significantly improve the reliability of a company's code. Automated scanning solutions, for example, can sift through in-use open source code to find potential vulnerabilities before they lead to any complications.

Categories: Companies

Dallas Ebola Patient Sent Home Because of Defect in Software Used by Many Hospitals

Here at Rice Consulting, we have been delivering the message to healthcare providers for over two years now that the greatest risk they face is in the integration and testing of workflows to make sure electronic health records are correctly made available to everyone involved in the healthcare delivery process - all the way from patients to nurses, doctors and insurance companies. It is a complex domain, and too many healthcare organizations see EHR as just a technical issue that the software vendor will address and test. However, that is not the case, as this story demonstrates.

From the Homeland Security News Wire today, October 6:

"Before Thomas Eric Duncan was placed in isolation for Ebola at Dallas’ Texas Health Presbyterian Hospitalon 28 September, he sought care for fever and abdominal pain three days earlier, but was sent home. During his initial visit to the hospital, Duncan told a nurse that he had recently traveled to West Africa — a sign that should have led hospital staff to test Duncan for Ebola. Instead, Duncan’s travel record was not shared with doctors who examined him later that day. This was the result of a flaw in the way the physician and nursing portions of our electronic health records (EHR). EHR software, used by many hospitals, contains separate workflows for doctors and nurses."

"Before Thomas Eric Duncan was placed in isolation for Ebola at Dallas’ Texas Health Presbyterian Hospital on 28 September, he sought care for fever and abdominal pain three days earlier, but was sent home. During his initial visit to the hospital, Duncan told a nurse that he had recently traveled to West Africa — a sign that should have led hospital staff to test Duncan for Ebola. Instead, Duncan’s travel record was not shared with doctors who examined him later that day.

'Protocols were followed by both the physician and the nurses. However, we have identified a flaw in the way the physician and nursing portions of our electronic health records (EHR) interacted in this specific case,' the hospital wrote in a statement explaining how it managed to release Duncan following his initial visit.

According to NextGov, EHR software used by many hospitals contains separate workflows for doctors and nurses. Patients’ travel history is visible to nurses, but such information 'would not automatically appear in the physician’s standard workflow.' As a result, a doctor treating Duncan would have no reason to suspect Duncan’s illness was related to Ebola.

Roughly 50 percent of U.S. physicians now use EHRs since the Department of Health and Human Services (HHS) began offering incentives for the adoption of digital records. In 2012, former HHS chief Kathleen Sebelius said EHRs 'will lead to more coordination of patient care, reduced medical errors, elimination of duplicate screenings and tests and greater patient engagement in their own care.' Many healthcare security professionals, however, have pointed out that some EHR systems contain loopholes and security gaps that prevent data sharing among healthcare workers.

The New York Times recently reported that several major EHR systems are built to make data sharing between competing EHR systems difficult. Additionally, a 2013 RAND Corporation study for the American Medical Association found that doctors felt 'current EHR technology interferes with face-to-face discussions with patients; requires physicians to spend too much time performing clerical work; and degrades the accuracy of medical records by encouraging template-generated doctors’ notes.'

Today, Dallas’s Texas Health Presbyterian Hospital has made patients’ travel history available to both doctors and nurses. It has also modified its EHR system to highlight Ebola-endemic regions in Africa. 'We have made this change to increase the visibility and documentation of the travel question in order to alert all providers. We feel that this change will improve the early identification of patients who may be at risk for communicable diseases, including Ebola,' the hospital noted."

Categories: Blogs

Reminder for November Workshops

Ranorex - Mon, 10/06/2014 - 12:57
We are very pleased to remind you about our upcoming online Ranorex training courses, scheduled for this fall.



Get firsthand training with Ranorex professionals and learn how to get the most out of Ranorex Studio at one of these workshops.

Look at the schedules for additional workshops in the next few months.
Categories: Companies

Free Web Load Testing Services

SQA Zone - Mon, 10/06/2014 - 12:56
The web is now the dominant platform for software development, and this has fostered the development of load testing services on the web. The Software Testing Magazine website has published an article that presents some free offers from commercial we ...
Categories: Communities

Seapine Software, Mechatronic AG to Give Presentation at MedConf 2014

The Seapine View - Mon, 10/06/2014 - 09:00

Representatives from Seapine Software and Mechatronic AG, a Seapine customer, will be giving a joint presentation at MedConf 2014 in Munich on October 15.

Wendelin Backhaus, Director of Quality Management for Mechatronic AG, will detail how his team uses TestTrack to support CAPAs (Corrective And Preventive Actions). For the past two years, Mechatronic AG has used TestTrack to track CAPAs and link them directly to development artifacts. TestTrack has proven to be a highly flexible, efficient, and expandable system that constantly evolves to meet Mechatronic AG’s changing needs. Backhaus will discuss the benefits his team has gained by adopting TestTrack.

Martin Kochloefl, a software solutions consultant at Seapine, will be on hand to demonstrate TestTrack. On October 16, Martin will discuss the results of Seapine’s 2014 State of Medical Device Development Survey, comparing them to our previous surveys from 2011 and 2013.

Since 2008, MedConf has been a leading conference centered on the software development and system design of medical devices. Several hundred participants attend each year to learn from and network with their peers. Learn more and register to attend at the MedConf 2014 web site.


Categories: Companies

Migrating to Jenkins Enterprise by CloudBees from Open Source Jenkins

Apparently a few of you have been wondering 'what is involved in moving from Jenkins OSS to CloudBees', and you're in luck: it's super easy! On a scale of difficulty, with 10 being quantum computing and 1 being writing 'hello world' in Python, it's probably a 2 or 3.
(Image source: XKCD)

I won't bore you with too much detail (which can be found here), but if you're interested in making this migration, you have two options. The one you pick really depends on you and your needs:

Scenario 1: Your Jenkins version is an LTS version newer than the latest version on this list
In this case, you'll want to install our "Enterprise by CloudBees" plugin. Simply go to your Plugin Manager ("Manage Jenkins" >> "Manage Plugins") and go to the "Available" tab. Install the plugin by checking the box next to "Enterprise by CloudBees" and selecting the "Install without restart" option.




After installing this "meta-plugin", you'll now need to go to "Manage Jenkins" and select the "Install Jenkins Enterprise by CloudBees" menu option.



You'll now have the option to pick whether you'd like to install all of the plugins packaged as a part of the Jenkins Enterprise by CloudBees offering or whether you'd prefer to just install the license for now. With the latter, you can choose which specific plugins you'd like to install later from your plugin manager.




Regardless of which option you pick, you'll see text updates appear on screen as the required steps are completed (adding the CloudBees update center, installing plugins, etc.).


Afterwards, you'll be prompted to input a valid license to continue.




If you've already purchased a license from someone in sales, simply select the third option ("I already have a license key") and enter your license key + certificate here.

If you haven't yet purchased a license, then you'll need to register for an evaluation here by selecting the first option and entering your name + email.

And that's it! Your OSS Jenkins master will now be a Jenkins Enterprise by CloudBees master.

Scenario 2: Your Jenkins version is an LTS version older than the latest version on this list

You can either upgrade using the meta-plugin outlined in scenario 1, or you can install the Jenkins Enterprise by CloudBees WAR whose version number matches the LTS version you have now - for example, you'd pick the Jenkins Enterprise 1.554 WAR if you're running OSS Jenkins 1.554.

Once you download the WAR, you would just need to set its JENKINS_HOME to be the same as the JENKINS_HOME your OSS Jenkins is currently working from and then run it.

Once you run the WAR, any plugins in your existing OSS installation's JENKINS_HOME will be updated to the version bundled with the Jenkins Enterprise by CloudBees WAR.
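
As a rough sketch, assuming a Linux machine where the existing OSS installation keeps its home in /var/lib/jenkins (the path, port and WAR file name below are illustrative), the switch could look like this:

# Reuse the existing OSS home so jobs, plugins and build history carry over
export JENKINS_HOME=/var/lib/jenkins
# Run the version-matched Enterprise WAR (file name illustrative)
java -jar jenkins-enterprise-1.554.war --httpPort=8080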

Alternative installation options
  • openSUSE users can run:

sudo zypper addrepo http://nectar-downloads.cloudbees.com/nectar/opensuse/ jenkins
followed by
sudo zypper install jenkins
  • Red Hat/Fedora/CentOS users can download an RPM package by adding the key to their system:
sudo rpm --import http://nectar-downloads.cloudbees.com/nectar/rpm/jenkins-ci.org.key
Then adding the repository:
sudo wget -O /etc/yum.repos.d/jenkins.repo http://nectar-downloads.cloudbees.com/nectar/rpm/jenkins.repo
Then installing Jenkins Enterprise by CloudBees:
sudo yum update
sudo yum install jenkins
  • Ubuntu/Debian users can install Jenkins Enterprise as a Debian package by adding the keys to their system:
wget -q -O - http://nectar-downloads.cloudbees.com/nectar/debian/jenkins-ci.org.key | sudo apt-key add -
Then adding the repository. If you have already added open-source Hudson/Jenkins as a repository, be sure to remove it to prevent Jenkins Enterprise by CloudBees from being overwritten:
echo deb http://nectar-downloads.cloudbees.com/nectar/debian binary/ | sudo tee /etc/apt/sources.list.d/jenkins.list
Then installing Jenkins Enterprise by CloudBees:
sudo apt-get update
sudo apt-get install jenkins
  • Windows users can download a ZIP file from here and execute the setup program inside, then access their instance at http://localhost:8080.

Scenario 3: Your Jenkins version is an LTS version older than 1 year OR is not an LTS version

You'll need to upgrade to an LTS version that is less than a year old and then follow either of the above instructions.
Categories: Companies

Latest Testing in the Pub Podcast: Software Testing Hiring and Careers

uTest - Fri, 10/03/2014 - 17:30

If you weren’t aware, software testing and quality assurance engineering are perennially ranked amongst the happiest jobs, and this was no different in 2014.

It’s thus a good time to be in demand as a software tester, with all of that love and happiness awaiting you in your next job. With that, uTest contributor Stephen Janaway’s latest Testing in the Pub podcast takes on the topic of hiring testers and software testing recruitment. You’ll hear about what test managers need to look out for when recruiting testers, and what testers need to do when seeking out a new role in the testing industry.

Part I of the two-part podcast is available right here for download and streaming, and is also available on YouTube and iTunes. Be sure to check out the entire back catalog of the series as well, and Stephen’s recent interview with uTest.

Categories: Companies

GSA plans to prioritize open source for future IT needs

Kloctalk - Klocwork - Fri, 10/03/2014 - 15:00

As the advantages offered by open source software strategies become increasingly clear, many organizations in the public sector are turning to this approach. In terms of both performance and cost-effectiveness, open source is often the ideal choice for government agencies.

The latest department to follow this line of thought is the General Services Administration. As FedScoop reported, the GSA recently announced a new policy which will require the priority consideration of open source options whenever the agency begins to develop a new IT project.

Open source for IT
The news source noted that this is a somewhat controversial decision. Some observers question whether open source software solutions will be able to meet the GSA’s IT needs to the same degree as proprietary software. However, Sonny Hashmi, chief information officer for the GSA, is confident that looking to open source first is the best option for the agency.

“During the process of vetting new software, GSA plans to implement a process where open source software is considered within the ranks of conventional software,” Hashmi told the news source. “We are confident that our vetting process will identify the best software for each IT solution based on the merits of the software, while also factoring in cost, support, security and a myriad of other factors.”

Looking for government solutions
FedScoop reported that the GSA will specifically look at other federal agencies for inspiration as to how to best implement open source solutions. Hashmi pointed to the Food and Drug Administration’s openFDA as a key example of a successful open source implementation that yielded positive results.

“When the Food and Drug Administration built out openFDA, an API that lets you query adverse drug events, they did so in the open,” Hashmi said, according to the news source. “Because the source code was being published online to the public, a volunteer was able to review the code and find an issue. The volunteer not only identified the issue, but provided a solution to the team that was accepted as a part of the final product.”

Hashmi went on to argue that all solutions created using taxpayer dollars should be open source, so the general public can benefit from these developments.

Open source benefits
Gunnar Hellekson, chief technology strategist for the U.S. public sector for Red Hat, argued that the GSA’s adoption of an open source-first policy will yield tremendous benefits for the agency.

“You use open source because it can be cheaper, easier to procure, more flexible, and gives you access to a community of developers and users that’s rare with proprietary software,” said Hellekson, the news source reported. “This kind of policy is already the de facto standard in the commercial world, and for good reason: Open source often provides more options, more innovation and better software for less money.”

Flexibility advantages
Additionally, it is important to note that other government organizations have turned to open source solutions not just for efficiency and cost-savings, but also to achieve superior control over their software implementations. After all, proprietary solutions are by nature far less flexible than open source code. Many proprietary software developers demand that clients, including public sector organizations, agree to rigid contracts, which can further limit expansion and evolution over time. With open source, agencies can enjoy a much greater degree of freedom, which is critical for fast-changing IT environments.

By embracing open source software at an accelerating pace, many government agencies are positioning themselves to respond more quickly and effectively to the country’s needs in the coming years.

Categories: Companies

The Power of the Team

NCover - Code Coverage for .NET Developers - Fri, 10/03/2014 - 13:07

They say no man is an island. We are not meant to be alone. While sometimes we may wish the opposite, as part of a programming team, we do rely on each other to build and deploy quality applications. And as we know, quality code truly is a team effort.

So how do we know that our code is good? How can we know with any certainty that we have squashed any bugs before deploy?

We know well written code and well tested code is the result of well managed code. Code coverage helps unite all those efforts. It is an essential tool for identifying where you can improve your tests, and your code, as early in the development cycle as possible. Effective tests are critical for delivering high quality code, reducing risks and maintenance costs, and increasing customer satisfaction.

For example, code coverage provides your team with both the big picture and the small details. Managers are able to see project overviews at a glance, see team-wide trends and drill down into the details when needed. Developers and QA members can log in to see the results of their individual efforts as well as how their team’s overall efforts are trending. This serves as the hub for the team’s code coverage, aggregating coverage across projects, teams and, if desired, an entire organization.

Adding in a layer of transparency and accountability that all members of the team can see will help improve the quality of code. Need to see it to believe it? Request an online overview from us and we can walk you through the benefits.

The post The Power of the Team appeared first on NCover.

Categories: Companies

Real browsers vs virtual browsers: which to choose for your testing?

Web Performance Center Reports - Thu, 10/02/2014 - 22:27
Since version 6, Web Performance Tester has supported two different ways to simulate user behavior on a website for testing: real browsers and virtual browsers. These two methods take very different approaches to the problem and each has different advantages and disadvantages. Those are not always obvious at first glance, so I’d like to run through the key differences to help you decide. But first, a brief description of the two approaches: Real browsers – When using the real-browser approach, the test is defined in terms of the actions that a human would take in the browser in order to complete … Continue reading »
Categories: Companies

A Separation of Testing and Product: Should Testers ‘Care’?

uTest - Thu, 10/02/2014 - 20:40

As a developer, it’s easy to care about that new app you’ve just created. Your new “baby” is taking off, being downloaded by millions of users all over the world — and it’s your brainchild, one that you’ve poured your blood, sweat and tears into.

But for those testing that app — they may want to do a good job in ensuring the app is successful, but do they actually have an emotional stake in the product itself? The answer to that isn’t as clear, and it’s something that was recently discussed in a great uTest Forums discussion.

According to one of our testers, in one experience at their job, it was pretty easy not to care about the product — it was out of necessity:

At the company I worked for, only 2 or 3 people actually knew we were working for Apple. All we knew and cared about was our assigned tasks. We didn’t know about the underlying product and it was important that we didn’t care. If we cared enough, we could probably investigate, ask questions, do research and figure out who the customer was. It would be terrible for our company if our customer’s identity was exposed so we were INSTRUCTED to not care about the customer. All we needed to do was ensure our product matched our customer’s specifications.

Another tester drew a parallel to illustrate how easy it is to separate feelings from doing your job — how you don’t need passion for the “subject” to be successful:

Do you think that surgeons are empathetic with their patients? Most of the top-notch surgeons have something that is called compassion fatigue, they are simply overloaded with patients that need care and most of them simply stop creating any emotional attachment to the patient. Yet they do a very good job saving lives, don’t they?

But don’t testers still have some ultimate stake in the success of the product, making it tough not to have some sort of emotional attachment to the creative or development process? One tester extended the surgeon analogy:

A surgeon may not emotionally care that the person they are operating on doesn’t make it. They will care if everyone they operate on doesn’t make it. They would be out of a job if they weren’t successful. They care about what other surgeons think of their skill. That example would be more geared toward the developer role. Testers don’t actually create. The guy who monitors the surgeon’s performance would probably care and would be closer to the tester role. It would reflect negatively on the monitor if their surgeon was not doing a good job.

It certainly would make sense that testers wouldn’t have an emotional attachment to the product given that it is not their “baby.” It could also be one of the reasons behind the pervasive industry problem that there aren’t enough testers willing to challenge themselves as thinkers, or the status quo. For example, why would a tester care enough to grow in their craft if they’re working on someone else’s creation versus creating themselves?

But this is all speculation designed to spark worthy discussion, as with our original Forums discussion. We leave it to you, the testing audience – should testers care about the product that they are testing? Should they have an emotional stake in something that isn’t theirs? We’d love to hear your thoughts in the Comments below.

Categories: Companies

Insurers Can Lower Their Risk with Better Software Testing

Storm clouds are gathering on the horizon, but the risk to insurers is not property damage.

Three major technology trends are converging on the insurance industry and could spell crisis for corporate governance and regulatory compliance.

The trends in question are consumerization, shadow IT and digital transformation. Significantly, they marry up with three issues that are specific to the insurance industry. These are modernization, regulatory uncertainty and operational efficiency.

Here’s how the trends and issues align:

  • Modernization and Consumerization – Few insiders would disagree that the legacy platforms of most insurance organizations hamper operational efficiency and are a few steps behind the times. Thus, calls for modernization resound through the industry. The rise of mobile computing and cloud services compounds the issue, by creating what PwC calls in its recent “Top Issues” report a “culture of consumerization within the enterprise — having what you want, when you want it, the way you want it.”  How can insurance companies modernize in a way that encompasses the flexibility of consumerization without jeopardizing good corporate governance?
  • Regulation and Shadow IT – According to analyst services such as The Wall Street Transcript, the impending Common Framework for Supervision of Internationally Active Insurance Groups (ComFrame) and other group-level supervision initiatives promise to influence regulatory change and policy worldwide and, therefore, affect virtually every insurer. Meanwhile, teams and individuals at insurers around the world are subscribing directly to cloud services for business reasons in record numbers. Pundits call the phenomenon “shadow IT” because these cloud services are operating outside the boundaries of the company. How can insurers ensure regulatory compliance at a time when employees are using more and more systems outside those provided by formal IT?
  • Operational Efficiency and Digital Transformation – According to our research, there will be unprecedented investment by insurance carriers in policy administration systems (PAS) over the next 24 months. This is driven by three factors – the need to improve operational efficiency, responsiveness to market demands and a maturing vendor landscape. The number one aspect of transformation must be the customer and agent experience. Of course, insurance isn’t the first industry dependent on quality consumer interaction to recognize this imperative. As reported by Software Quality Matters last month, what pundits call “digital transformation” is overtaking industries such as retail, where new versions of e-commerce portals and mobile apps roll out on virtually a daily basis. How can insurance companies not only overhaul their PAS but update them at the pace of digital transformation that customers worldwide have come to expect?

There’s no single answer to resolving the challenge of ensuring good governance and compliance in a fast-changing technology environment but in its report PwC states better software testing is one way insurance companies can take control. We would agree.

By deploying the right testing technology, insurers can comprehensively document all systems changes as they are tested, which makes it dramatically easier to demonstrate compliance and provide process visibility. In most cases today, much of the testing of systems is carried out manually by users, so in order to minimize the disruption that could result from frequent system updates it is critical to ensure that testing is as automated as possible. Given the non-technical nature of the people involved in testing, this means the technology must be very easy to use, avoiding such things as the requirement to write and maintain code. Only then can insurers innovate fast enough yet safely enough.

Better testing can’t break up the darkening clouds, but it can provide insurers with shelter from the storm.

The post Insurers Can Lower Their Risk with Better Software Testing appeared first on Original Software.

Categories: Companies

Better Software Testing Can Quell the “Omnichannel Frenzy” for Retailers

In a commentary for Retail Gazette earlier this year, guest columnist Kate Barron wrote that the “growth of the internet, social media and mobile technology” has driven retailers into “omnichannel frenzy.” Further, retail professor at Vlerick Business School, Gino Van Ossel, declared that retailers who fail to adopt an omnichannel approach are “digging their own graves.”

These are not just provocative statements but a fast-approaching reality. Consider these findings from a recent Accenture study:

  • 71 percent of consumers said seeing information on in-store products on a mobile device is “important” or “very important.”
  • 39 percent believe that they are “unlikely” or “very unlikely” to visit a physical store if a retailer doesn’t provide relevant product information on a mobile website.

Connect these two dots and the picture of the future that’s drawn is clear: Retailers without effective omnichannel strategies – defined simply as allowing shoppers to move freely across the spectrum of channels from digital to physical and back again – will lose customers.

And if recent statistics about the growth of e-commerce and mobile retailing are any indication, the exodus could be swift, as shoppers move at the speed of a click of the mouse and swipe of a finger. As reported by Computerworld, Nielsen’s Global E-Commerce Survey found the volume of people planning to shop online has doubled in the last few years. And according to several studies by the multinational consulting firm Deloitte Digital, in major world retail markets sales initiated by smartphones alone currently stand at $40 billion, and mobile-influenced transactions account for $593 billion in sales.

Our own research confirms the pursuit of omnichannel capability weighs heavy on the minds of UK retailers. In July, we commissioned Martec International, a specialist retail consulting company, to interview 40 large UK retailers. Executives involved in systems testing and e-commerce responded, hailing from companies with sales totaling £147 billion and representing 46 percent share of the total UK retail market. Among Martec’s findings are:

  • Omnichannel is driving 71 percent of retailers to deliver major software projects in the next 18 months.
  • Maintaining systems and data at a high level of quality in the face of frequent change is a challenge.
  • 69 percent of retailers reported suffering problems because of software bugs, with e-commerce systems top of the list because of the “numerous different devices, browsers and operating systems that customers use to access a retailer’s web site.”

In our minds, these conclusions indicate the quality of retail applications and e-commerce content is fated to run afoul of the omnichannel fervor. Why? Because the two biggest problems executives in our study reported were a “lack of resourcing” and a “lack of automation” for systems testing.

Here’s how the General Manager of IT at a grocery retailer cast the issue: “We don’t have anything to assist the testing process and the volume and complexity of testing just goes up all the time with all the promotions and product offers we have.”

And the Testing Manager of a supermarket retailer said: “It is hard to meet the deadlines set for projects when there is such a quick turnaround expected.”

The trouble is that most testing technology is a poor fit for today’s retailer. Not only does it struggle to keep pace with fast-changing environments – i.e., e-commerce websites – but it is also too technical for business people, who perform most of the testing on packaged applications. There exists, however, a new breed of testing technology, used by the likes of Arcadia, which addresses these challenges while speeding up testing 3-4 times.

A cost-effective investment in software testing could lead to bringing higher quality customer-facing systems to market sooner – and, in turn, these higher quality systems could create the omnichannel experience that keeps the shopper in a retailer’s fold.

The post Better Software Testing Can Quell the “Omnichannel Frenzy” for Retailers appeared first on Original Software.

Categories: Companies

Debug real-browser tests with breakpoints in QA Tester and Load Tester

Web Performance Center Reports - Thu, 10/02/2014 - 15:56
Since the first release of real-browser support, it has been possible to pause a testcase replay using the pause button. If you need to stop in the middle of a long testcase, however, it can be inconvenient to sit and wait for the important part. Web Performance Tester™ (WPT) now supports breakpoints in real-browser testcases. To set or clear a breakpoint, select the step and choose “Toggle Breakpoint(s)” from the pop-up menu. The breakpoint will be indicated with a matching pause icon on the step. During interactive replays, the virtual user will pause when it reaches any step with a breakpoint. … Continue reading »
Categories: Companies

Web and App Server Monitoring Basics: Trending Transaction Performance and Throughput

On our about:performance blog we talk a lot about problem patterns such as too many database statements, wasteful memory management leading to too much garbage collection, web performance worst practices, or the performance of cloud and virtualized environments. I was recently intrigued by a couple of screenshots my colleague Reinhard showed me, which give a perfect overview […]

The post Web and App Server Monitoring Basics: Trending Transaction Performance and Throughput appeared first on Compuware APM Blog.

Categories: Companies

Telerik Will Be at Testathon, a One-of-a-Kind Hackathon for Testers, This Saturday in San Francisco

Telerik TestStudio - Thu, 10/02/2014 - 15:22
We will be mobile testing this Saturday, October 4, in San Francisco at the Testathon, a unique series of testing hackathons gathering 50 of the world’s top testers for a day of exhaustive testing, learning, networking and fun.
Categories: Companies

NServiceBus 5.0 behaviors in action: routing slips

Jimmy Bogard - Thu, 10/02/2014 - 14:57

I’ve written in the past about how routing slips can provide a nice alternative to NServiceBus sagas, using a stateless, upfront approach. In NServiceBus 4.x, it was quite clunky to actually implement them. I had to plug in to two interfaces that didn’t really apply to routing slips, only because those were the important points in the pipeline to get the correct behavior.

In NServiceBus 5, these behaviors are much easier to build because of the new behavior pipeline features. Behaviors in NServiceBus are similar to HttpHandlers or koa.js callbacks, which form a series of nested wrappers around inner behaviors in a sort of Russian doll model. It’s an extremely popular model, and most modern web frameworks include some form of it (Web API filters, node, FubuMVC behaviors, etc.)

Behaviors in NServiceBus are applied to two distinct contexts: incoming messages and outgoing messages. Contexts are represented by context objects, allowing you to get access to information about the current context instead of doing things like dependency injection to do so.

In converting the route supervisor in my routing slips implementation, I greatly simplified the whole thing, and got rid of quite a bit of cruft.

Creating the behavior

To first create my behavior, I need to create an implementation of an IBehavior interface with the context I’m interested in:

public class RouteSupervisor
    : IBehavior<IncomingContext> {
    
    public void Invoke(IncomingContext context, Action next) {
        next();
    }
}

Next, I need to fill in the behavior of my invocation. I need to detect if the current request has a routing slip, and if so, perform the operation of routing to the next step. I’ve already built a component to manage this logic, so I just need to add it as a dependency:

private readonly IRouter _router;

public RouteSupervisor(IRouter router)
{
    _router = router;
}

Then in my Invoke call:

public void Invoke(IncomingContext context, Action next)
{
    string routingSlipJson;

    if (context.IncomingLogicalMessage.Headers.TryGetValue(Router.RoutingSlipHeaderKey, out routingSlipJson))
    {
        var routingSlip = JsonConvert.DeserializeObject<RoutingSlip>(routingSlipJson);

        context.Set(routingSlip);

        next();

        _router.SendToNextStep(routingSlip);
    }
    else
    {
        next();
    }
}

I first pull out the routing slip from the headers. But this time, I can just use the context to do so, NServiceBus manages everything related to the context of handling a message in that object.

If I don’t find the header for the routing slip, I can just call the next behavior. Otherwise, I deserialize the routing slip from JSON, and set this value in the context. I do this so that a handler can access the routing slip and attach additional contextual values.

Next, I call the next action (next()), and finally, I send the current message to the next step.

With my behavior created, I now need to register my step.

Registering the new behavior

Since I now have a pipeline of behaviors, I need to tell NServiceBus when to invoke my behavior. I do so by first creating a class that represents the information on how to register this step:

public class Registration : RegisterStep
{
    public Registration()
        : base(
            "RoutingSlipBehavior", typeof (RouteSupervisor),
            "Unpacks routing slip and forwards message to next destination")
    {
        InsertBefore(WellKnownStep.LoadHandlers);
    }
}

I tell NServiceBus to insert this step before a well-known step, loading handlers. I (actually Andreas) picked this point in the pipeline because, in doing so, I can modify the services injected into my step. The last piece is configuring and turning on my behavior:

public static BusConfiguration RoutingSlips(this BusConfiguration configure)
{
    configure.RegisterComponents(cfg =>
    {
        cfg.ConfigureComponent<Router>(DependencyLifecycle.SingleInstance);
        cfg.ConfigureComponent(b => 
            b.Build<PipelineExecutor>()
                .CurrentContext
                .Get<RoutingSlip>(),
           DependencyLifecycle.InstancePerCall);
    });
    configure.Pipeline.Register<RouteSupervisor.Registration>();

    return configure;
}

I register the Router component, and next the current routing slip. The routing slip instance is pulled from the current context’s routing slip – what I inserted into the context in the previous step.

Finally, I register the route supervisor into the pipeline. With the current routing slip registered as a component, I can allow handlers to access the routing slip and add attachments for subsequent steps:

public RoutingSlip RoutingSlip { get; set; }

public void Handle(SequentialProcess message)
{
    // Do other work

    RoutingSlip.Attachments["Foo"] = "Bar";
}
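
Wiring this up at endpoint startup is then just a matter of calling the RoutingSlips() extension shown above during bus configuration. Here is a minimal sketch of a self-hosted NServiceBus 5 endpoint (the hosting code is illustrative, not part of the routing slips implementation itself):

var busConfiguration = new BusConfiguration();

// Registers the Router, the current RoutingSlip and the RouteSupervisor behavior
busConfiguration.RoutingSlips();

// Standard NServiceBus 5 self-hosting; every incoming message now has its
// routing slip unpacked and is forwarded to the next destination after handling
using (var bus = Bus.Create(busConfiguration).Start())
{
    // send messages carrying routing slips here
}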

With the new pipeline behaviors in place, I was able to remove quite a few hacks to get routing slips to work. Building and registering this new behavior was simple and straightforward, a testament to the design benefits of a behavior pipeline.


Categories: Blogs
