Insight 10™ Tops 16 Competitors for Improving Code Security and Reliability
BURLINGTON, MA – November 19, 2013 – Klocwork® Inc., a global leader in software development tools for creating secure code, today announced that RoadTrack, the leading vehicle telematics systems and services company in Latin America, has selected Insight 10™, Klocwork’s desktop source code analysis and review toolset, to identify and help correct security vulnerabilities and reliability issues in its vehicle automation systems.
RoadTrack works with leading automotive manufacturers to supply OEM and after-market vehicle telematics systems. For example, the company has worked with General Motors to factory-install and activate more than 500,000 systems in Latin America to date, under the ChevyStar brand. RoadTrack’s latest OEM-certified technology, the Platinum 7 Series, provides comprehensive vehicle security, personal safety, vehicle navigation, user communications and infotainment features.
To uphold the highest security and product quality standards in these increasingly connected and complex applications, the company needed a robust static analysis process for the range of C/C++ and Java code it is embedding. It evaluated 16 different vendor tools to address that need. In the end, Klocwork Insight’s coverage of security vulnerabilities and code defects was the strongest, easiest to use and most dependable.
“We tested all the tools. Klocwork’s depth of analysis and accuracy in the results was a deciding factor. Their focus on the developer desktop, letting them find and fix issues faster and produce cleaner code from the start, also made the difference,” said Francisco Velastegui, RoadTrack CTO.
“Software developers the world over want tools that fit seamlessly with the way they work, and help them do their jobs faster and more effectively. Industry-leading companies in the embedded world are looking for tools that identify security vulnerabilities and reliability issues, de-risk time-to-market and protect their corporate brands. As shown by RoadTrack, Klocwork’s innovative desktop SCA tools are the best at meeting these requirements,” said Mike Laginski, Klocwork CEO. “Our desktop analysis allows engineers to find and fix security vulnerabilities and code defects at their desktop, as the code is being written, not after it is checked in for a build. This was key to our win in another very competitive opportunity.”
RoadTrack works with leading automotive manufacturers to supply OEM and after-market vehicle telematics systems and support services for English-, Spanish- and Portuguese-speaking markets, among others. Based in Ecuador, the company employs more than 1,000 systems engineers and call center personnel around the globe, and focuses on the highest product and service quality standards in the design, manufacture and support of its advanced products.
If you guys like it, you might swing me a comment and I’ll make sure to continue. Here’s what I aggregated in the past few days.

ASP.NET News
- Announcing release of ASP.NET and Web Tools 2013.1 for Visual Studio 2012 (blogs.msdn.com)
- IamA (we are) Microsoft ASP.NET and Web Tools Team (and Azure) AMA! : IAmA (www.reddit.com)
- Routes, Extensionless Paths and UrlEncoding in ASP.NET - Rick Strahl's Web Log (www.west-wind.com)
- TYPE CAST EXCEPTION | Extending Identity Accounts and Implementing Role-Based Authentication in ASP.NET MVC 5 (typecastexception.com)
- Embedding a simple Username/Password Authorization Server in Web API v2 (http://leastprivilege.com)
- Adding Refresh Tokens to a Web API v2 Authorization Server (http://leastprivilege.com)
- Owin middleware (blog.tomasjansson.com)
Retailers are preparing for the flock of early morning shoppers who will descend on brick and mortar stores on Black Friday – the morning after Thanksgiving. And online retailers are gearing up for Cyber Monday, when their apps will be put to the ultimate eCommerce test. But increasingly these two worlds are merging. A new study shows that more shoppers than ever plan to do their Black Friday bargain hunting online.
From shopping to research, your eCommerce site had better be ready for a wave of traffic in the coming weeks. From Internet Retailer:
Black Friday this year promises to be more digital than last year, suggest survey results from Accenture. The management and information technology consulting firm says that 30% of consumers will do most of their day-after-Thanksgiving shopping online this year, up from 25% who said the same in 2012. …
A separate survey from the National Retail Federation finds that nearly half of consumers expect to go online to research gift ideas.
Shopping around for gift ideas won’t be contained to retail sites, though. According to the National Retail Federation survey, shoppers will go to different types of online media to find the perfect gift idea.
- 47.9% of consumers will seek out holiday gift ideas online
- 21.5% will use e-mail marketing messages
- 14.0% will use Facebook
- 10.1% will use retailers’ apps
- 7.2% will use Pinterest
And those numbers don’t account for shoppers who will use apps while in physical stores. This year, 55% of Accenture’s survey respondents expect to shop in some form on Black Friday, and 38% plan to shop on Thanksgiving Day itself.
Make sure your retail or eTail app is functional, usable and ready to stand up to the traffic load. (If you’re concerned your retail app isn’t ready for prime time, check out our free resources: Retail App Testing and Optimized eCommerce.)
Component-Capable Release Management is Key to DevOps – Part 4 (Part 3 previously; Part 5 up next)
DevOps conversations are dominated by release management and production deployment. These were the primary topics at the DevOps conferences we have attended in Atlanta, New York, Vancouver, Portland, Barcelona and London. This concerns me at some level – if DevOps just becomes a fancy word for IT Ops, then the movement will not be that important – but it’s the reality given that DevOps is a new, immature approach. Not only are the conversations primarily about these topics, but many of the discussions are tool- or technology-related: what CI and CD tools are best? What packaging construct should be used? And so on.
So why are the conversations focused on release management and deployment? DevOps is a reaction to Agile – some even say that “DevOps completes Agile” – so the first thing IT Ops has to do is keep up with agile delivery. That means deploying small and frequent changes in a repeatable, reliable and predictable fashion. It’s a natural and necessary starting point: if DevOps can’t support this capability, it will fall all over itself when it tries to move on to more strategic topics like incorporating security and compliance efforts into the software lifecycle process.
This is not an easy task, since many organizations have large and diverse environments. Given that organizations are trying to deploy more often, approaches that rely on manual intervention just won’t work – so organizations that can automate every single aspect of the release and deployment process can outmaneuver their competitors. This is key – just think of the business advantage an organization has if it can modify its website to react to user behavior, or modify its production systems quickly to introduce new products. The impact can be significant – so if you are looking for budget justification for your DevOps efforts, try relating release and deployment speed to business agility.
So what does this all have to do with Sonatype? Well, Sonatype is all about helping people manage and leverage components effectively, including open source components. We know that the average application is now constructed of 80% or more components, so the release management and deployment process has to take this into account. While components help developers construct applications quickly, if your infrastructure isn’t capable of managing components effectively, you will introduce risk into your applications – security, licensing and quality risk. As you think about assembling the right mix of tools to support your build and release process, including Continuous Integration and Continuous Delivery tools, make sure you factor in components.
Here are a few things to consider:
- Repository Manager Foundation – Start with a repository manager that can help you store and share your binaries effectively. As you scale your efforts, your repository should scale with you – and it should provide the enterprise class features like security, build promotion and staging, etc., that you need to manage all of your components effectively.
- Policy-based Support for Build Promotion and Staging – Instead of automating the approval process for components, use automated policies that apply your security, licensing and architecture standards to the release management process. The automated policies should provide guidance and enforcement (e.g., stop a build from being deployed) so that you can ensure your production systems are rock solid.
- Factor Components into your CI & CD Initiatives – As organizations leverage continuous integration technologies to automate the build and test process, and to extend automation into the deployment realm with Continuous Delivery approaches, they need to think about the role of components. One way to do this is to integrate your component management and governance approach directly into the build and CI technologies. This allows you to apply your policies and enforce action directly in those tools.
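To make the idea of policy-based enforcement concrete, a build-promotion gate might check each declared component against security and licensing rules before a build can move forward. The sketch below is purely illustrative – the component names, the banned-license set, and the vulnerability list are stand-ins for real policy data and CVE feeds, not any particular vendor's implementation:

```python
# Hypothetical component-policy gate for a CI pipeline.
# All policy data below is illustrative, not a real feed.

BANNED_LICENSES = {"AGPL-3.0"}  # licenses the legal team has disallowed
KNOWN_VULNERABLE = {("commons-collections", "3.2.1")}  # stand-in for a CVE feed

def check_component(name, version, license_id):
    """Return a list of policy violations for one component."""
    violations = []
    if license_id in BANNED_LICENSES:
        violations.append(f"{name} {version}: banned license {license_id}")
    if (name, version) in KNOWN_VULNERABLE:
        violations.append(f"{name} {version}: known security vulnerability")
    return violations

def gate_build(components):
    """Return False (block the build) if any component violates policy."""
    all_violations = []
    for name, version, license_id in components:
        all_violations.extend(check_component(name, version, license_id))
    for v in all_violations:
        print("POLICY VIOLATION:", v)
    return not all_violations

# Example bill of materials for a build under review
bom = [
    ("commons-collections", "3.2.1", "Apache-2.0"),
    ("guava", "18.0", "Apache-2.0"),
]
print("build promoted" if gate_build(bom) else "build blocked")
```

Wired into a CI server, a gate like this turns the security, legal and architecture standards into an automated step rather than a manual review.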
The good thing about this approach is that you end up incorporating other constituents into the process – you may be thinking, “I need to manage the release process. I need to keep up with the Developers. I need to automate building and deploying my VMs.” By leveraging security, licensing and architecture policies in your build and release management process, you are automatically incorporating the security, legal/compliance and architecture teams into the process. That’s a completely natural fit since DevOps is about driving collaboration and communication between constituents involved in the software lifecycle.
We are small, but thinking big. We are looking to extend our platform and add new features for both the community and our service offering. Thus, we are now building a world-class, talented development team in Tallinn, Estonia.
If you don’t know much about Estonia, it’s the place where Skype was born, and where you can sign all your documents from your mobile phone or vote in elections from your computer. Estonia is an innovative country and has attracted a lot of startup attention lately. It really is full of young, passionate and smart people, which creates a lot of synergy for the next big thing to emerge.
We offer you the possibility to influence all aspects of our platform development in order to make it a world-class product. And of course, being part of a world-class development team is motivating in itself, with plenty of opportunities to learn and grow.
If you are a kick-ass developer and want to change the world, let us know: email@example.com.
More about our offerings: http://testlio.com/jobs
There are several ways to approach this, and using empirical data from previous sprints’ actual velocities is a good start. It is up to you to determine what works best for your team and context:
- The simplest way to calculate the team’s target velocity (TTV) is to take the previous sprint’s actual velocity and use it as the target. For example, if the previous sprint's actual velocity (TAV) was 120, then for the next sprint the TTV will be 120.
- Another simple way is to take the average of the previous sprints’ actual velocities. For example, if the previous sprints’ TAVs were 80, 90, 110, 100, and 120, then sum these (500) and divide by 5; the TTV will be 100.
- A more complicated way is to take the previous sprint’s actual velocity, plus credit points for completed effort on unfinished work, minus points for team members who will be away on holiday, vacation, or training (i.e., out of office). For example, if the previous sprint’s TAV was 120, take credit for completed effort on unfinished work of 9 points (3 points for each of 3 unfinished user stories), and subtract 10% of the previous sprint’s TAV (12 points) for the Thanksgiving holiday, bringing the total to 117 (= 120 + 9 - 12).
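The three approaches above reduce to simple arithmetic. The sketch below uses the numbers from the examples in the list; the 10% holiday adjustment is just one possible out-of-office estimate:

```python
# Three ways to estimate a team's target velocity (TTV)
# from previous sprints' actual velocities (TAVs).

def ttv_last_sprint(tavs):
    """Method 1: use the most recent sprint's actual velocity."""
    return tavs[-1]

def ttv_average(tavs):
    """Method 2: average the actual velocities of all previous sprints."""
    return sum(tavs) / len(tavs)

def ttv_adjusted(last_tav, carryover_credit, out_of_office_points):
    """Method 3: last sprint's velocity, plus credit for completed effort
    on unfinished stories, minus points lost to time out of office."""
    return last_tav + carryover_credit - out_of_office_points

tavs = [80, 90, 110, 100, 120]
print(ttv_last_sprint(tavs))             # 120
print(ttv_average(tavs))                 # 100.0
print(ttv_adjusted(120, 9, 0.10 * 120))  # 117.0
```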
As an aside, when your team is first starting out, you will not have any empirical data or previous actual velocity to work with. In other words, you will have to guess your team's target velocity. However, the good news is that because the sprint is often only 2, 3, or 4 weeks long, your team will not have to wait long to learn the team's actual velocity. This can then be used as empirical data to help you calculate your team's next sprint's target velocity.
Whichever way you decide to calculate the team's target velocity, ensure that the previous sprints’ actual velocities are applied in some manner, to bring as much empirical data into the calculation as possible. Also, apply a consistent calculation from sprint to sprint. From there, inspect and adapt as appropriate.
In the hope of streamlining account-creation e-mail delivery and mailing list moderation, I have deployed SPF and DKIM over the weekend for e-mails coming out of @jenkins-ci.org, which includes account applications, Confluence, and JIRA.
I've also used this opportunity to switch the sender of JIRA notifications back to firstname.lastname@example.org. It was originally this way, then changed to email@example.com when someone complained (on what grounds, I no longer remember).
To the degree that I have tested the setup, it is working correctly, but if you notice anything strange, please let me know.
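For readers unfamiliar with these mechanisms: SPF and DKIM are both published as DNS TXT records on the sending domain. The records below are purely illustrative – the IP address, selector name, and key value are invented, not the actual jenkins-ci.org records:

```
; Illustrative SPF record: lists the hosts allowed to send mail for the domain
jenkins-ci.org.                 IN TXT "v=spf1 ip4:192.0.2.10 ~all"

; Illustrative DKIM record: publishes the public key that receiving servers
; use to verify message signatures (selector "mail" and key are made up)
mail._domainkey.jenkins-ci.org. IN TXT "v=DKIM1; k=rsa; p=MIGfMA0GCSq...AB"
```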
I first attempted to use HealthCare.gov to learn about options for covering my granddaughter, who is not covered by my employer-subsidized insurance. I encountered the same kinds of account creation issues others have reported, but I decided to turn on my web browser’s built-in developer tools to see if I might see details as to why form submissions were failing. I quickly discovered that the main browser window would often display a status other than what was actually occurring. For example, the form submission would fail to get a response from the server but the user interface would report that the form was submitted. Once I saw this behavioral mismatch between what was displayed in the browser and what was actually happening, I kept developer tools on as I used the site.
I do not consider using developer tools to watch data moving in and out of my own computer to be “hacking.” I have NOT “hacked” Healthcare.gov. I have only observed what is sent to my computer. I have NOT attempted to gain unauthorized access to Healthcare.gov accounts. Attempting to gain unauthorized access would be both unethical and illegal. Please don't try it.
While watching the interactions between my web browser and the Healthcare.gov servers, I saw information being sent to my computer that likely should not have been sent by the server. After I was told that Healthcare.gov will not take reports of security concerns, I started blogging them.
Then I came across a very serious issue.
I discovered a design defect that subsequently led to me receiving a great deal of media attention. Little did I know that my findings would be mentioned in Wednesday's congressional hearings:
ESHOO: On the issue of security, there was a security breach that arose recently, that I read about at any rate. And what I think is very important here, because the issue of privacy has been raised, and I think that that has been answered. Very importantly, there isn't any health information in these systems. But there is financial information, so my question to you is, has a security wall been built, and are you confident that it is there and that it will actually secure the financial information that applicants have to disclose?
SEBELIUS: Yes, ma'am, I -- I would tell you that there was not a breach, there was a blog by a sort of skilled hacker that if a certain series of incidents occurred, you could possibly get in and obtain somebody's personally identifiable...
(CROSSTALK) ESHOO: But isn't that telling? Isn't that telling?
SEBELIUS: And we immediately corrected that problem, so there wasn't a -- it was a theoretical problem that was immediately fixed. I would tell you we are storing the minimum amount of data, because we think that's very important. The hub is not a data collector. It is actually using data centers at the IRS, at Homeland Security, at Social Security to verify information, but it stores none of that data, so we don't want to be.....
Secretary Sebelius is correct: I did not breach or exploit any of the vulnerabilities that I reported on my blog. And it is nice that she thinks I’m “sort of skilled” as a hacker, when I’m actually a highly-experienced software tester.
I identified a series of steps that could be easily automated to collect usernames, password reset codes, security questions, and email addresses from the system -- without any kind of authentication.
Attackers could use this information to go phishing. Exposing this information gives attackers sufficient information to gain trust and trick people into disclosing their security question answers.
If I were a malicious phisherman, I might send users email that directs them to a site masquerading as HealthCare.gov, and then ask victims to provide their security questions in order to revalidate their account. After collecting this information, I could then reset the password and access information the user provided to HealthCare.gov.
I found this issue last Thursday night (October 24th). I notified HealthCare.gov customer service immediately. The next morning, I found someone who could help pass information about my discovery to people within HHS. CMS patched the most serious hole the same day, and made further changes on Monday before making a public statement about the issue.
“We are eliminating this theoretical vulnerability by preventing users from seeing the specific reset functionality when trying to reset their password... There is no public evidence that these design flaws were ever exploited to compromise user accounts." - Brian Cook, CMS Spokesman
While I am appalled that the issue existed in the first place, I applaud the quick response.
Monday night, after CMS publicly confirmed the fix, I took a quick look at the new "fixed" password reset functionality.
I saw a couple of positive changes:
- The password reset code is no longer returned to the web browser. This closes the biggest hole. Exploiting weaknesses in the password reset system will now require that password reset codes be obtained by intercepting email, or some other mechanism.
- The system asks for security question answers and a new password before submitting the request. This slows down manual security question guessing attempts, but will have little impact on an automated attack.
I also saw that many potential security issues still exist:
- The system still confirms whether a username or email address exists in the error messages returned by the underlying services. Given that these are not public identifiers in the Insurance Marketplace, these should not be revealed. (As of 11/07, this still exists.)
- The system still transmits both the username and password reset code via email. Email is generally not a secure means of communication. A more secure way to do this would be to send the user only half of the equation: the reset code; and then prompt the user for the username after they follow the reset link. (As of 11/08, this still exists.)
- The password reset code still stays the same with each request (and is the same code used to initially activate the account). A more secure way to do this would be to change the code each time a password reset is requested or a password is reset; and in the case that a system contains sensitive information like this one does: put a time limit in which the reset code may be used. (As of 11/13, this appears to have been fixed. Reset codes are changing and old ones don't work.)
- If any of the above (or other) issues lead to a username or password reset code being compromised, the security questions and email address associated with the account can still be retrieved from the system without authorization.
- In the unfortunate event that an account is compromised:
- An attacker can change the email address associated with the account without triggering notification of the email change to the user. Once this is done, other account information can be changed without notifying the owner of the account. (As of 11/05, this still exists.)
- The personal information used to validate a user's identity is returned to the browser each time the user logs into the system. This data is both retained and returned to the browser when it should no longer be needed, and returning it on every login increases the potential damage should an account be compromised. The data includes the personal information the account owner provided to verify their identity -- sufficient information to steal that person's identity. It also includes all information entered into an insurance application for each person on the application (e.g., names, DOB, SSNs, disability, pregnancy, financial details) and data retrieved from back-end systems (e.g., employer, income, and last paycheck details). (As of 11/13, the identity verification data is no longer returned to the browser; however, the personal information in the insurance application still is.)
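A common way to address the reset-code weaknesses described above is to issue a fresh, random, single-use code with an expiry every time a reset is requested. A minimal sketch follows; the in-memory store and one-hour window are my assumptions for illustration, not HealthCare.gov's implementation:

```python
import secrets
import time

RESET_TTL_SECONDS = 3600  # assumed one-hour validity window
_pending_resets = {}      # username -> (code, expiry); stand-in for a database

def issue_reset_code(username):
    """Generate a fresh random code, replacing any previous one."""
    code = secrets.token_urlsafe(32)
    _pending_resets[username] = (code, time.time() + RESET_TTL_SECONDS)
    return code

def redeem_reset_code(username, code):
    """Accept a code only once, and only before it expires."""
    entry = _pending_resets.get(username)
    if entry is None:
        return False
    stored_code, expiry = entry
    if time.time() > expiry or not secrets.compare_digest(stored_code, code):
        return False
    del _pending_resets[username]  # single use: invalidate on success
    return True
```

Because each request replaces the old code, a code that leaks (via email interception or any other channel) is useless once it has been redeemed or has expired.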
Have you heard?
SLAVITT: Our systems don't hold data. They just transport data through it.
ROGERS: You don't have to hold it to protect it.
Both Secretary Kathleen Sebelius and Andy Slavitt, an executive VP at QSSI (the company tasked with fixing Healthcare.gov) have downplayed security concerns. They have suggested that personal information is not at risk because The Hub, the Healthcare.gov front end, does not store information; but rather, transports information. A system is only as secure as its weakest link. If front-end security is poor, then no amount of back-end security can protect information passing through the front end.
Even if Healthcare.gov doesn't store information, it returns personal information to the browser. As outlined above, the data I once provided to verify my identity is sent to my computer each time I log in to Healthcare.gov -- long after the identity verification has been completed. This information includes name, address, date of birth, phone number, and Social Security Number. It also returns data retrieved from back-end systems, including employer, income, last paycheck details, and more.
Much of the work required to exploit these vulnerabilities can be automated.
Many of these vulnerabilities are rather benign when considered individually. However, they quickly become more serious concerns if we consider how they may be combined and the exploits automated.
I have discovered several additional vulnerabilities while using the site.
- The email validation system demonstrated the same flaw as the password reset system: It returned the activation code (which is the same as the password reset code) to the browser; enabling one to create an account using an email address they do not own. (As of 11/5, this appears to be fixed.)
- My username and questionnaire answers were sent over the Internet without encryption under an error condition that also led to my profile information not being displayed. (As of 11/08, this still exists.)
- The system returned Java stack traces to the browser; potentially revealing information about the internal workings or data of the system that could be exploited to find weaknesses in security. (As of 11/05, this still exists.)
- The security questions ask for things that are likely to be known by one's friends, family, or ex -- and are the sorts of things many will post on Facebook or other social media. (This morning, HHS Secretary Sebelius referred to these questions as "personalized questions that can only be verified by you". They aren't.) (As of 11/08, this still exists.)
We can only speculate at what other security vulnerabilities might be found by someone willing to attempt to gain unauthorized access.
But wait, there's more!
- Healthcare.gov returns the username for an account when given a user's real name and email address. (This appeared to be fixed on 11/06, then reappeared on 11/08. As of 11/10, it appears to have been fixed again.)
- Healthcare.gov returns the security questions for an account when given a username. (As of 11/11, this appears to have been fixed. I last saw it on 11/08.)
No other authentication is required. Although this doesn't provide an attacker with the password reset code, it exposes information that should be kept private and provides sufficient information to make phishing relatively easy.
I am very happy that the most egregious issue was immediately fixed.
Other issues remain.
The vulnerabilities I've listed above are defects that should not make it to production. It doesn't take a security expert or “super hacker” to exploit these vulnerabilities.
This is basic web security. Most of these are the kinds of issues that competent web developers try to avoid; and in the rare case that they are created, are usually found by competent testers.
Rather than individual incompetence, these issues might suggest a fractured development team -- one where the developers building the components don't know how they are used in the system, and therefore lack the situational awareness to understand the security implications of their decisions. Still, someone has to assemble the components, and the system as a whole should be tested for security. Given that I don't know what's going on within the project, I can only speculate. I have, however, seen enough to be concerned.
The volume of users, the nature of the data presumed in the system, and the political attention all contribute to making HealthCare.gov a target of interest to attackers -- of higher interest than the typical web site. This demands a higher standard. This requires that security be made a priority throughout design, implementation, testing, and monitoring of the system.
I am still concerned about Healthcare.gov security. It should concern all of us.
Scaling Agile appears to be a common topic these days. Of course, there is good advice and bad advice on how to do that. But how do you know which is which? A few weeks back I came up with the idea of an anti-maturity model. If you have dealt with a few maturity models in the past, these usually run from level 1 to level 5, where level 5 means more mature. My anti-maturity model runs differently, with level 0 indicating that you are probably on the right track, and level MAX_INT that you are probably not doing too well. Why does this scale run differently? As part of my work, I realized there is always someone out there who can come up with an even worse way of doing things than that other guy I thought was worst.

Level 5 – The Brooks
You realize that the constraint of 3-9 team members is artificial. Since you are a large organization, you need adaptations, and that includes team sizes of 100 members. They have already worked this way for decades. Changing a working team is going to interrupt their productivity. That’s not something you want to put at risk just by switching to another methodology.

Level 4 – The Blue-printers
Scaling Agile is easy at this level. All you need to do is use Scrum, since almost everyone uses it. Then you run your Sprints: Sprint 1 is Requirements, Sprint 2 is Architecture, Sprint 3 is Design, Sprints 4-5 are Programming, and Sprint 6 is for finding the one to blame. Since you waste a lot of your time doing that, you won’t ship any software at all.
Of course, you can run the sprints in parallel, since you will stick with your requirements department, enterprise architects, and designers.

Level 3 – The Conways
Some of your IT teams are using agile. The larger organization, though, uses proven methods like project plans to adhere to. Your teams are not cross-functional. Instead, you still have a department in place for all the testing, and you are going to stick with that, since you can’t release the crap the teams build anyway.
Your software architecture maps directly to the organizational structure. You make sure to stick to that structure by introducing additional roles, like a technical counterpart for the business-facing ProductOwner who hands out architecture stories to your teams.

Level 2 – The C3POs
You realize that you have a lot of people involved. The concept of Scrum of Scrums is too limited to work for you. Since you have 42 management levels involved, you should go for a Scrum of Scrums of Scrums of Scrums of Scrums of Scrums of Scrums of Scrums of Scrums of Scrums of Scrums of Scrums of Scrums of Scrums of Scrums of Scrums of Scrums of Scrums of Scrums of Scrums of Scrums of Scrums of Scrums of Scrums of Scrums of Scrums of Scrums of Scrums of Scrums of Scrums of Scrums of Scrums of Scrums of Scrums of Scrums of Scrums of Scrums of Scrums of Scrums, so that all your important people can serve as a ProductOwner of a ProductOwner of a ProductOwner of a ProductOwner of a ProductOwner, and so on. The technical term for that model is Chief ProductOwner, and your most senior one is afterwards called the CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCPO. You have the same structure in your ScrumMaster organization. The top-most person there is called the Master of Disaster.

Level 1 – The Downstreamers
You have tiny little teams which work together – your database team creates the database first, so that in the second sprint your business domain team can code the business domain, and your UI guys will deal with the user interface in sprint 3. Your ProductOwners are self-organizing, and driven by their own project managers. Your product is driven by three customers, and your ProductOwners deal with the necessary re-prioritization for that – each on their own.

Level 0 – Emergent Scaling
You realize that scaling agile is a complex problem. That means that you need emergent practices, and can’t work from a blue-print or project plan. You realize that you should probe the underlying organization for the changes necessary, and see what happens as you incorporate little experiments while maybe scaling the organization to use Agile all over the place – including accounting, marketing, and sales.
You don’t focus on one agile methodology or framework, but make local adaptations as necessary to provide the biggest customer benefit all the way through. Your teams deliver business value from end to end, and each team works as its own start-up in the larger organization. Since you have decoupled the whole organization, you don’t need many people while still being able to ship products every second of the day.

Conclusion
If you have read thus far, congratulations. If you find there is still something you want to add, please feel free to do so.
In case you want to outline this model to your management, I also came up with a name, an abbreviation for it: ScAAMM.
Perception is Reality for healthcare.gov : The Impact of Latency and Available Bandwidth on Page Load Times
Due to a problem arising from the combination of software and system components in model year 2007-2008 Honda Odyssey minivans built August 6, 2006 through September 8, 2008, the auto manufacturer has issued a recall. In total, 344,187 units are potentially affected.
The flaw arises from the combination of Vehicle Safety Assist System components and software in a way that is unique to these models of Odyssey vehicles, according to the National Highway Traffic Safety Administration. As a result of the error, the VSA system may cause the vehicle to brake hard unexpectedly, without illuminating the brake lights, thereby creating a risk of a crash from behind.
“If calibration of the yaw rate sensor is prohibited when the vehicle starts moving and then the vehicle is driven in a specific manner, the VSA system can build hydraulic pressure in the braking system,” Honda stated in its recall notification. “If this occurs without interruption, the pressure may be released into the brake circuit, causing heavy and unexpected braking without the driver pressing on the brake pedal and without illumination of the brake lamps, increasing the risk of a crash.”
Since the error is tied to faulty sensor hardware, the parts needed to fix the affected vehicles will not be available until 2014. Owners will be notified of the issue and given instructions on how to mitigate the braking problem. Honda has not received any reports of crashes or injuries resulting from the flaw.
For vendors, the recall represents an important reminder that the way third-party hardware components interact with software is key for automotive safety and that software quality needs to be built in to every step of the manufacturing chain. Using tools such as static analysis software, companies can ensure they comply with MISRA standards and other automotive safety and quality requirements.
Software news brought to you by Klocwork Inc., dedicated to helping software developers create better code with every keystroke.
As new technologies (hello, smartwatches and Google Glass) emerge onto the app scene and the pre-existing mobile app market further explodes in growth, it’s hardly a surprise that testing teams are scrambling to keep up with the rising costs and complexities of software testing.
In fact, a recent Sogeti survey of software testing/QA professionals confirms this fact – 92% of testing pros surveyed find that the cost and complexity of software testing is on the rise. What’s even worse is that only 22% of teams are ready for the challenges.
One of the biggest challenges, according to Sogeti UK’s CEO Brian Shea, is maintaining quality.
“Quality has never been a higher priority for businesses as they work hard to attract customers in highly competitive markets,” said Shea. “Continual re-evaluation of test process and capability is necessary to ensure the smooth roll out of transformational projects that will ultimately ensure they stay at the top of their game.”
Of course, uTest’s testing model presents a textbook opportunity for organizations to re-evaluate test processes and supplement their in-house QA structure with the added benefits of an in-the-wild approach. The quality may be there inside the comfy confines of the organization, but it doesn’t accurately reflect the quality where the company’s users work, live and play.
As to addressing the rising costs, uTest’s all-you-can-test model ensures organizations get maximum value from each testing cycle, keeping costs in check for organizations from start-up to SMB to enterprise.
As organizations struggle to cope with the costs and complexities of software testing spiraling out of control, in-the-wild testing may just be worth a look as part of that “continual re-evaluation of test process.”
As agile methodologies continue to grow in popularity in enterprise software development, many testers are unsure of their role in the process. Often it seems like unit testing is done by developers, acceptance testing by the product owner or users, and functional testing not at all. This leaves testers wondering whether test plans, test cases, UI testing, and defect management have any place at all in software today.
There are still important roles for traditional testers in agile methodologies, but these roles require a shift in both skills and thought processes. First, testers have to be able to test more quickly; turning around test results within a 2-4 week iteration simply isn’t possible using old techniques. Instead, testers have to leverage automation, both to run existing test cases and to build a regression suite.
Second, testers have to collaborate more closely with developers, in order to understand development decisions and to rapidly create and execute testing strategies. Working together and sharing information is the only way rapid development-test cycles can work.
Last, testers need fast access to information on test results and bug analysis to let the team make more informed decisions on the state of quality and when an iteration can be released. Not knowing the quality state of a release is a prescription for a bad or unusable product.
Telerik Test Studio supports agile by enabling testers to easily automate test cases and schedule those cases to run without having to be physically present. You can also schedule tests to run immediately after a build, so that the team understands the state of the build immediately. This enables faster test execution in a process where time is everything.
Test Studio brings developers and testers closer together, whether using Test Studio standalone or the Visual Studio plug-in. It enables the easy sharing of information within a single environment.
If you haven’t already, try Test Studio risk-free for 30 days. Download the trial at http://www.telerik.com/automated-testing-tools/. You can also request a personal demo at http://www.telerik.com/automated-testing-tools/support/live-demos/personal-demo.aspx.
About the author Peter Varhol
Peter Varhol is an Evangelist for Telerik’s Test Studio. He’s been a software developer, software product manager, technology journalist, and university professor among the many roles in his past, and believes that his best talent is explaining concepts and practices to others. He’s on Twitter at @pvarhol.
Application developers may need to start being more careful with the way they handle consumers’ personal data, according to a recent survey from IT professional organization ISACA. The organization’s 2013 IT Risk/Reward Barometer found that an overwhelming majority of people worldwide have concerns about the way data is gathered and used in the growing sphere of connected devices that make up the Internet of Things, even as their personal privacy practices remain lax. Among Americans, just 1 percent named mobile application makers as the institution they most trusted with personal data gathered by Internet of Things devices.
More than nine out of 10 respondents in the survey were concerned about the information collected by Internet-connected devices such as sensors and cameras that the term “Internet of Things” refers to. Although just 6 percent of respondents were familiar with the term Internet of Things, 62 percent reported using GPS systems, 28 percent said they used electronic toll devices in their cars and 20 percent claimed to use smart TVs. For consumers, the greatest concern is that someone will hack into connected devices and steal their personal data, with 31 percent expressing such a fear.
Other concerns about privacy abound as well, with half of respondents saying they felt they had no control over the way websites used their information, and 90 percent saying they were concerned their information would be stolen. Despite these concerns, the majority reuse the same two or three passwords across multiple sites, and 25 percent report not checking the privacy settings on their social media profiles in the last six months. Around eight out of 10 also said they do not always read privacy policies before downloading applications to their tablets or smartphones. Additionally, many are leery of the growing practices revolving around location-based marketing, with almost half saying they would find it invasive if a store sent them an offer via text message as they were walking past.
“People are starting to think through the implications of giving companies this type of information,” Robert Stroud, chair of the COBIT growth task force at ISACA, told IDG News Service.
The takeaway for developers
According to the survey, 99 percent of IT professionals believe the Internet of Things poses governance issues, but 42 percent say the benefits outweigh the risks. Three out of 10 say their enterprises have already seen the benefits of greater access to information, and 29 percent report improved services. To this group, the biggest concerns for consumers should be not knowing who has access to their information or how it will be used.
Given IT’s stance on the Internet of Things, it’s unlikely that the push toward connected devices will let up. However, the burden will fall on industry professionals to reassure consumers that they do not face the kind of security risks they perceive from such technology, and to make sure the terms regarding how such tools are used are clear. Actually making software security a priority in development will be a key to winning this battle of perceptions, and developers can use tools such as static analysis software to build more safeguards into their devices. ISACA proposed a framework for enterprises to be agile and seize the advantages of the Internet of Things while also including data governance as a priority.
“Internet-connected devices are already delivering powerful business and lifestyle benefits, but organizations using these need to proceed with transparency and with the consumer at the forefront of their decisions,” said Jeff Spivey, international vice president of ISACA. “The deep concerns about privacy and security uncovered by this year’s IT Risk/Reward Barometer show that enterprises need to establish and openly communicate policies around use of personal data to preserve trust in information.”