Community Update (2013/12/06) – Profiler or debugger for slow pages, WebAPI 2 output caching and secure OWIN WebAPI
Not a lot of content this Friday, but we have a few items that are very interesting.
Profiler or debugger, output caching or per-controller configuration… it’s all below. Do not miss the last one on securing ASP.NET Web API with Windows Azure Active Directory and OWIN.
I wish you all a good weekend.
There is a cool initiative going on from the folks at Tea Time With Testers and QA Intelligence. They’re launching a survey to help us understand the state of testing! What are the main challenges in the profession? Where is the profession headed? I’m definitely supporting the initiative and interested in the results! I’m following the software testing trends via analysts but a survey coming from the community could be quite insightful!
Note: I have no affiliation with the survey.
As software applications develop simpler user interfaces and infiltrate more areas of everyday life, many young people are being drawn into programming by tech’s fundamental allure. But the streamlined nature of many popular applications may also be changing the way young developers think about building programs, teaching them to focus on basic functionality and immediate solutions rather than on scalable, thought-out design. In a recent column for SD Times, software consultant Anthony Hooper described this phenomenon as “Tech ADD,” suggesting that enterprise development shops may need to help their young hires reorient toward more granular, slower practices.
The issue, Hooper explained, is that many young developers know how to combine existing tools or write lines of code that make machines do what they want, but these slapdash approaches are designed for quick construction, not robust applications. Even simple consumer entertainment apps face challenges like handling loads of hundreds of thousands of users at once, and enterprise apps can involve even more complex needs. To achieve this durability, young developers may have to focus on one set of code for a long time and work on a more basic, lower level than they are accustomed to.
“Their idea of ‘programming’ often involves just throwing together various plug-ins, frameworks, libraries and, in some extreme cases, a few bits of code that are already floating around in the public domain,” Hooper wrote of young programmers. “But this can easily lead to a common software design pitfall we call ‘the big ball of mud.’ This is a program so tightly coupled that it’s very difficult to scale or customize, and it’s rarely robust enough to handle large volumes of users.”
Building on a more basic level
To achieve the kind of application flexibility needed, younger developers may have to learn to code in a lower-level language like C++, Hooper suggested. At the same time, they need the “mental agility” to transition to a Web developer mindset and leverage the Web application skills needed to build a large-scale application. As programmers learn to balance multiple languages, however, there is an inevitable learning curve.
“C++ is very, very deep,” John Sonmez, a developer, blogger and founder of Simple Programmer, wrote in a blog post dissecting the role of C++. “There is just a very large amount to know about the language itself. C# and Java development are somewhat about learning the language, but much more about learning the libraries. C++ development is more about learning every nook and cranny of the language.”
The complexity of C++ makes it unsuitable for many high-level projects, Sonmez went on to argue. Additionally, it creates a challenge for young developers being brought in and taught slower, more low-level approaches that use the language. For companies that need to work with budding talent to build this skill, some number of C++ mistakes is inevitable. To mitigate these errors and help developers understand their mistakes as they build their code, companies can use static analysis software. By catching logical errors and immediately letting the developer see those problems, static analysis provides the perfect tool for teaching as programmers go along. For companies that need to train their young developers to look beyond simple, already built components and to develop custom tools themselves, static analysis is an ideal educational companion.
Software news brought to you by Klocwork Inc., dedicated to helping software developers create better code with every keystroke.
You may have noticed that not many people know how the websites, software and apps they use every day actually came into existence. They know that a developer built the app, but, as Lorinda Brandon mentioned in a recent column, the concept of software testing seems to leave them stumped. I’m not even a tester, but I’ve noticed this lack of understanding whenever I tell people I work for a software testing company – they have no idea what that is or what we do. Once I explain, it seems to make sense to them, but the overall concept of software testing just doesn’t seem to be in the mainstream consciousness. Until now. As Lorinda pointed out in her column, the national discussion around the launch of healthcare.gov propelled testing and QA into the spotlight.
While several “software glitches” have been featured on the evening news, I can’t recall any that have caused a national conversation about the process of building and testing software until the Healthcare.gov debacle. Suddenly, Americans are sitting at their kitchen tables – in suburbs, in cities, on farms – and talking about quality issues with a website and who might be at fault.
The average American was given nightly tutorials on load testing and performance bottlenecks when the site first launched, then crumbled moments later. We talked about whether the requirements were well-defined and the project schedule reasonably laid out; we talked about who owns the decision to launch and whether they were keeping appropriate track of milestones and iterations. After that came the public discussions about security holes, which is not an unfamiliar concept to most people. But with those discussions came a healthy dose of encrypted passwords, third-party information sharing, and authentication protocols. School children and grandparents alike are worried about whether their passwords are being passed in the clear now. Imagine. There was even a major congressional hearing about the site, much of which focused on whether it was tested well enough.
It got really interesting when the media went from talking about the issues in the website to the process used to build the website. This is when software testers stepped out of the cube farm behind the coffee station and into the public limelight. Who were these people – and were they incompetent or mistreated? Did the project leaders not allocate enough time for testing? Did they allocate time for testing but not time to react to the testing outcome? Did the testers run inadequate tests? Were there not enough testers? Did they not speak up about the issues? If they did, were they not forceful enough?
So rejoice you software testers, QA specialists and quality evangelists – the world (or at least the US) now knows what you do!
I am also sure that we testers share many of the same issues, questions and even frustrations around our working environment and our day-to-day tasks.
Is there something we can learn from these shared challenges?
How can we forge a better future for all of us by understanding more deeply what is happening to our profession?
With these questions in mind I got in touch with my friends from Tea Time with Testers, and together we decided to launch a survey about the current reality and the challenges faced by testers and QA professionals around the world.
This is “The State of Testing” survey.
Go on and participate!
The survey is now open; you can reach it from its page on the QABlog. Our plan is to run it for the next 10 days, until the end of Monday, Dec 16th.
Go ahead and participate – it will take you less than 10 minutes, and you will be helping the whole testing community by providing more information about our current reality and challenges.
How can you help?!
Did I say it starts by filling out the survey?
So go ahead!
In addition to that, we need as many additional testers as possible to do the same, and we could use your help to spread the word.
You can help by posting a blog that points to the survey, by tweeting about it, by posting about it on your LinkedIn, Google+ or Facebook account, by “talking” about it on any local or international testing forum you know, and even by telling your testing co-workers and friends.
You can see that we have already started adding the names of collaborating bloggers on the survey’s page, and we will also send them/you the results before we make them publicly available, so that you can blog about them as well.
The more answers we have, the more insightful the information we will get from the survey.
Stay tuned…
I am sure that we will all be surprised!
Developers have become savvier about incorporating software security into their development process earlier in product lifecycles, and many have effectively used tools like static analysis software to eradicate vulnerabilities from their own code. However, most projects incorporate some amount of open source code, and these libraries can introduce vulnerabilities if not carefully monitored. While open source code is generally secure, companies have a responsibility to ensure they are using the most up-to-date versions and to look for vulnerabilities in their products regardless.
According to a recent study from White Source Software that looked at around 3,000 commercial software projects, 23 percent of programs contain open source code with known vulnerabilities. In general, the problem stems from inconsistent updates: Of those vulnerable open source libraries, 98.7 percent were not the most up-to-date versions.
With eight to nine out of every 10 software projects using open source software, improving the security of open source components in programs is a growing industry concern. Another recent study from Rapid7 argued that open source projects need better vulnerability reporting practices, while several new bug bounty programs have launched offering rewards to researchers who find flaws in popular open source tools. Nonetheless, the issue is not so much that open source libraries are generally unsafe as that developers may need to pay more attention to the elements they incorporate into their own programs.
“Open-source communities are very diligent and go through a lot of trouble fixing and identifying problems,” White Source CEO Rami Sass told Dark Reading. “The real issue is the disconnect between that community and its end users.”
As companies look to implement secure open source code in their products, they can benefit from applying the same code review methodologies to open source libraries as they would with their own custom code. Using static analysis software, they can quickly examine existing code bases and look for potential problems – work that can also help improve the overall open source project. With increased attention being paid to open source vulnerabilities, companies can cover their bases by taking such precautions.
Software news brought to you by Klocwork Inc., dedicated to helping software developers create better code with every keystroke.
As someone responsible for a mobile automation product, I spend a reasonable amount of time looking at the competition. I particularly like it when I see some innovation, something that forces us to do even better and stay ahead of everyone else. Usually, this innovation comes from a small company or from the open source community. Appium has been on my radar ever since it came out, but I waited for it to gain some maturity before trying it out. My conclusion is very simple: Appium is not ready for prime time. It reminds me of Selenium and WebDriver in their infancy. Not something you want to invest in for serious work. It may be fine for individual developers building some smoke tests, but if you have an enterprise app, for example, with complex gestures and a lot of tests to build and maintain, Appium is not a good candidate yet.
These are the main shortcomings I’ve noted. Some are really showstoppers for me.
#1 – No support for intelligent waits.
This is the biggest shortcoming and a big red flag. You can code some time delays (polling loops) and that’s it. A time delay is the WORST mechanism to manage your page flow or address back-end response time variability. Reasons being:
- How long should I wait after a back-end call? Yeah, let’s put 10 seconds, that should be more than enough. It might work 90% of the time, and you end up spending time investigating errors for the other 10% of your test cases. The goal of automation is to find regression issues in your app, not problems with your environment!
- How long should I wait to account for device performance? Transitions between pages depend a lot on the performance of your device. They might be almost instantaneous on the latest iPad Air, for example, with its new CPU, and could be crawling on older iOS devices. So should you add 2 seconds each time you transition between screens in your app? Should be enough, right? Sure, it should be fine. Multiply these 2 seconds by your number of transitions, number of tests and number of devices, and you end up with a lot of hours spent WAITING! When you have 10,000 tests to run overnight, that’s not something you can afford. Especially when developers need feedback FAST!
Without the ability to wait on a UI element’s characteristics (visibility, value, etc.) to manage page flow or back-end calls, an automation framework for mobile is pretty much worthless in my book.
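To make the contrast concrete, here is a minimal sketch of what an intelligent wait does: poll a condition and return the moment it holds, instead of sleeping a fixed interval. This is plain Python, not Appium’s actual API, and the `driver.find_element` call in the comment is a hypothetical illustration:

```python
import time

def wait_until(condition, timeout=10.0, poll_interval=0.25):
    """Poll `condition` until it returns a truthy value or `timeout` expires.

    Returns the truthy value, or raises TimeoutError. Unlike a fixed
    time.sleep(), this returns as soon as the condition holds, so fast
    devices don't waste time and slow devices don't fail spuriously.
    """
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError("condition not met within %.1fs" % timeout)
        time.sleep(poll_interval)

# Hypothetical usage: wait for an element to become visible after a
# screen transition, instead of sleeping a fixed 2 seconds every time.
# element = wait_until(
#     lambda: driver.find_element("login_button").is_displayed())
```

Multiplied across thousands of tests, returning as soon as the element appears (rather than always paying the worst-case delay) is exactly the time saving the argument above is about.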
#2 – On iOS, you can only execute one test at a time per Mac.
This is a limitation coming from Apple Instruments that Appium uses for execution. I’m working with customers running 5000+ tests per night, on 30+ devices, all in parallel. It would take them many days to run all their tests sequentially. This is a SERIOUS limitation and I don’t know how Appium is going to solve this since they have no control over Apple Instruments. You can always buy 1000 Mac Minis! Yeah right.
#3 – Limited support for gestures.
#4 – Limited support for Android < 4.2
Appium only supports API Level >= 17 (Android 4.2 and above). If you look at the number of devices running the different versions of Android (via the Android Dashboard), you realize that Appium can only support a very limited share of the market. Do you want to invest seriously in a framework that only lets you support 18.2% of your market? I don’t think so. I thought it was pretty crazy to have such limited support, so I dug a bit. Apparently, you can support older Android versions by using a Selendroid library, but it’s not for the faint of heart, judging by the number of people struggling to make it work.
These are my 4 reasons why I think Appium is not ready for serious work. They’re showstoppers in my book. I’m also not a big fan of the approach Appium is taking, which is fairly similar to Selenium/Webdriver. If you like to write tons of code for your functional tests, be my guest. Of all the WebDriver implementations I’ve seen in my career, a lot required a big investment, especially on the maintenance side. But I haven’t seen all projects in the world.
Appium is young, and it might grow into something fit for mobile. And honestly, I hope it will! I’m a big fan of open source and the innovation it brings. But mobile is a big business, the product you pick to test your app is an important choice and investment, and you want to make sure it covers all your requirements. I just don’t think Appium is there yet.
Today, it’s easier said than done. The agile method has long been discussed in the software development space, but how many companies have actually migrated to it?
Gradually, enterprise development teams have not only recognized the benefits of agile but have successfully shifted away from their waterfall methodologies. Still, others have lagged behind, sticking to their longer waterfall dev methods. Now the pace of technology is beginning to leave these companies with no choice. As Matt Asay says in ReadWrite, agile is no longer an alternative:
“Agile development is no longer an alternative way to develop software. With the pace of technology adoption accelerating at a frenetic pace, agile is increasingly the only way to develop software. That is, if you want to stay in business.“
The number of mobile, tablet and connected devices is growing rapidly. But beyond that, usage rates are higher than for any previous technology. Rita McGrath, in the Harvard Business Review, compared today’s adoption rates with the adoption of older forms of technology:
“It took 30 years for electricity and 25 years for telephones to reach 10% adoption but less than five years for tablet devices to achieve the 10% rate. It took an additional 39 years for telephones to reach 40% penetration and another 15 before they became ubiquitous. Smart phones, on the other hand, accomplished a 40% penetration rate in just 10 years, if we time the first smart phone’s introduction from the 2002 shipment of the first BlackBerry that could make phone calls and the first Palm-OS-powered Treo model.
It’s clear that in many arenas things are indeed speeding up, with more players and fewer barriers to entry.”
This increased adoption requires an efficient, more flexible model of development. Agile helps teams reach release sooner, forcing them to be more efficient with their time and efforts. It helps companies of all sizes keep pace with the adoption of technology, with the competition and with their users’ expectations. That is… if it’s done correctly.
Even Asay admits agile isn’t the picture perfect solution, “Agile development isn’t some holy grail that will solve all a developer’s problems, but it is a savvy way to keep pace with technology adoption and to tackle large-scale development projects.” There are many things that can go wrong with an agile method, such as overlooked design and planning, condensed testing and a lessened focus on fragmentation. That’s why it’s critical that brands move beyond agile, maintaining efficiency and quality. That way brands can keep up with the pace of technology – and maintain their brand reputation.
For more resources on Beyond Agile, download this free whitepaper>>
As Microsoft prepares to end support for Windows XP, a new privilege escalation exploit in the operating system has emerged, highlighting the likely upcoming security tumult once support ends. The exploit, which uses vulnerabilities in certain versions of Adobe Reader, enables an attacker to gain full administrative privileges through the Windows XP kernel. As the final date for Microsoft support of Windows XP grows closer, it also serves as a warning to developers of the enduring potential for zero-day vulnerabilities in their products.
In the wild, the vulnerability, which was discovered by FireEye Labs, allows for local privilege escalation by using a previously patched exploit in Adobe Reader 9.5.4, 10.1.6, 11.0.02 and prior versions. The shellcode decodes a privilege escalation payload from a malicious PDF and drops it in the temporary directory.
“The vulnerability is an elevation of privilege vulnerability,” Microsoft stated in an advisory acknowledging the flaw. “An attacker who successfully exploited this vulnerability could run arbitrary code in kernel mode. An attacker could then install programs; view, change or delete data; or create new accounts with full administrative rights.”
The future threat landscape
Effectively, the exploit provides a way around Adobe’s sandbox, several experts noted. Qualys CTO Wolfgang Kandek told Dark Reading that this type of attack, which strings together multiple vulnerabilities to deliver a workaround for security measures, is becoming more popular as vendors build more robust protections into their products.
“Most attackers need to chain together multiple vulns,” he explained. “I think this is in that spirit.”
While the vulnerability is mitigated by the fact that the attacker must already have access to the machine and can be easily prevented by updating to current versions of Adobe Reader and Windows, the fact that it is appearing in the wild may also be a harbinger of future problems for Windows XP. Microsoft will cease to provide support for the operating system on April 8, 2014, and many attackers are likely waiting with zero-days in hand to begin preying on remaining users, experts told Threatpost.
“From a security perspective, this is a really important milestone,” Microsoft spokesperson Holly Stewart told the site. “Attackers will start to have a greater advantage over defenders. There were 30 security bulletins for XP this year, which means there would have been 30 zero-day vulnerabilities on XP [without support].”
For developers, the likely flood of XP exploits and the increasingly common trend of multi-vulnerability chains are both reminders that attempts to penetrate software’s defenses do not end when new security features are added or the program reaches the end of its life. To protect against ongoing hacker activity, developers can strengthen their programs by building in more software security during the original development process. Using tools such as static analysis software, coders can catch errors that could lead to exploits down the line and eliminate them long before the end-of-support deadline for a product draws near.
Software news brought to you by Klocwork Inc., dedicated to helping software developers create better code with every keystroke.
Games are not always about winning or losing. Each game can have different objectives. In my early youth I started with role-playing games and, later on, storytelling games. At the time, people who knew little about them sometimes asked us, “Who is winning?” Being 7 years old, I didn’t really know how to explain nicely to this grown-up that winning or losing is not always the objective. Instead, when we were role playing we worked as a group to move the story along, to gain experience, to improve our characters with skills and items and, probably above all, to have fun.
35 years later, I still hear the same question popping up in role-playing situations, but also in similar situations, such as gamification. When we talk about gamification of testing, I would find it really demotivating if we were to compete against each other within the team or organization on a regular basis. I have seen and experienced instances in my past when we had leader boards counting bugs in various fashions. We had what I see as really destructive discussions about whether those who reported the fewest bugs really provided value to the group.
A recent article by Shrini Kulkarni approaches gamification of testing with the mindset of competition. I will look at a few of his arguments.
“This definition is provisional one – I might not be considering types of games that do not falling in this category. I wonder if there is any game where there is notion of victory or defeat.”
Yes, there are many different types of games. Believing that games are only about winning and losing is too narrow. Some games are for introducing people to each other, others for passing time; others, such as role-playing and storytelling games, can be about solving puzzles or mysteries as a group, where you act as a persona different from your own. The list is infinite.
“How about goals or objectives of a player or team playing games in the first place? Winning of course!”
Richard Bartle investigated the objectives behind different play styles in Massively Multiplayer Online games (MMOs) and Multi-User Dungeons (MUDs) and discovered that only a fraction of the players had the objective of winning. Instead, there were other aspects, such as socializing, that were more interesting and in focus.
“How many times you heard the statement “it is not important to win or lose, participation and competing with ones best ability is important”. So if you lose do not feel bad – there is always another chance.”
In a testing context, working against others through competition would, as I see it, harm the organisation and the teams. As I stated in my previous article, when considering gamification you need to consider the regular traps of testing in order to work on areas that provide value. But you also need to consider how to create a good working environment, and a competitive environment might not be the best solution. I do not see it as fruitful to compete on many of our test activities, such as information gathering. How do you weigh one type of information over another? Would you, in order to win, choose not to share valuable information with others in the team?
“A good test strategy in testing is same as winning strategy in games. But then – what is the meaning of winning the game of testing? Against who?”
We really do not have an opponent in our test strategy, as I see it. Still, we can use many aspects of game and war strategies in our reasoning. I often look for inspiration in the writings of Sun Tzu, Carl von Clausewitz and other strategists. Yet, when considering strategy for testing, I see it as meeting different objectives or goals for retrieving information, rather than winning.
Jonathan Kohl has taken elements from MMO-style gaming and considered quests and adventuring. I believe this is an excellent area to look at. In most role-playing situations, the group cooperates, working towards a set of goals or objectives, just as you would as a tester in a team.
So, for those of you who are starting to dig into the world of gamification in testing: do look beyond the usual winning/losing concept. There are more aspects at play here. Instead, see gamification itself as a complex system, which you in turn apply to other systems in order to enhance cooperation, motivation, feedback and learning, among other things.
Community Update (2013/12/05) – What is OWIN, Azure Caching in MVC, WebAPI Deep Tutorial and some tools
So what do we have here? A very clear explanation of what OWIN is by Robert Muehsig (MVP). It’s a must-read to finally know what OWIN is all about. Then there are some ways to optimize your MVC app with Azure caching and compiled views, and finally, we have an amazing multi-part tutorial on WebAPI. Don’t forget to scroll to the bottom of the article to see the other parts.
I'm going to visit Sao Paulo once again this weekend to attend the second annual Jenkins users meet-up. It's a free, whole-day event on Saturday, full of Jenkins goodness.
You'll hear from a number of active Jenkins folks, and I'll be presenting on what CloudBees (where I currently work) has contributed to the Jenkins project, including recent new OSS plugins and some services. I'm also stuffing my suitcase with lots of give-aways, including Jenkins stickers and the popular Jenkins bobbleheads. I don't intend to bring anything back to the U.S.!
The morning half of the event is a cross-Atlantic hackathon between Brazil and Copenhagen. You can check what's being planned on the western side of the ocean and on the eastern side. The afternoon half is a series of presentations. Please come join us – I'm really looking forward to seeing you!
I'll be in Sao Paulo for the whole Sunday and Monday as well. If you are interested in talking to me, please feel free to drop me a note.