
Feed aggregator

In a post-Heartbleed world, firms need to scrutinize their open source security

Kloctalk - Klocwork - Fri, 07/11/2014 - 15:30

For quite a while, open source security solutions enjoyed a virtually unbroken string of successes, with little in the way of negative news surrounding these offerings. Then came Heartbleed. Undoubtedly the most significant setback to open source security ever discovered, the Heartbleed vulnerability exposed a tremendous percentage of the total Internet to possible cyberthreats.

In light of this revelation, many industry experts have offered their thoughts on the future of open source. Speaking at Computing's recent Enterprise Security and Risk Management Summit in London, a number of panelists asserted that Heartbleed should indeed cause businesses to scrutinize their open source security efforts, but that it is too late to abandon such solutions altogether.

Open source on the loose
The discussion began when Computing Magazine editor Stuart Sumner asked the panelists whether Heartbleed should cause business decision-makers to be more doubtful toward open source software. In response, Marc Lueck, director of global threat management at publishing house Pearson, argued that there is really no longer any choice in the matter.

"We don't have the opportunity to change our minds now, we're using open source, that decision is made," he said, the news source reported. "We now need to figure out how to fix it, how to solve it, how to protect ourselves from decisions that have already been made."

Lueck is far from the only expert to share this view. Writing for ZDNet, Steven J. Vaughan-Nichols recently argued that the "future belongs to open source," even despite the Heartbleed revelation.

"Outside of Apple and Microsoft, everyone, and I mean pretty much everyone, has already decided that open source is how they'll develop and secure their software. Google, Facebook, Yahoo, Wikipedia, Twitter, Amazon, you know all of Alexa's top ten websites in the world, rely on open-source software every day of the year," Vaughan-Nichols wrote.

Scrutiny needed
The significance of Heartbleed, therefore, is not that companies need to reconsider their commitment to open source, but rather that firms should make more of an effort to ensure that these solutions are fully protected and applicable to a given situation, as Ashley Jelleyman, head of information assurance at BT and a participant in the Computing panel, explained.

"I think the real issue is not whether it's open source or closed source, it's actually about what you do with it and how you actually evaluate it to make sure it's fit for purpose," said Jelleyman, Computing Magazine reported. "It's have we checked this through, are we watching what it's doing?"

This thought goes back to one of the most popular notions concerning open source security: The idea that with enough eyes, all bugs are shallow. With proprietary solutions, a few oversights could potentially lead to a serious software security flaw, but open source enables and, theoretically, requires more people to examine any given piece of code. This dramatically reduces the likelihood that a major vulnerability will persist for long.

Yet such an oversight is precisely what happened to OpenSSL, leading to the Heartbleed flaw. Essentially, every organization that leveraged OpenSSL assumed that the software had been thoroughly vetted. It was so popular, it seemed inevitable that someone at some point would have noticed if there was any real vulnerability.

To protect themselves from future open source security risks, organizations need to take a closer look at their open source practices, rather than relying heavily on assumptions. By adopting a more critical posture to understand where open source is being used and the associated risks, firms can embrace open source and all of the advantages it entails without compromising their cybersecurity capabilities.

Categories: Companies

New Online Course: Beautiful Builds and Continuous Delivery Patterns

ISerializable - Roy Osherove's Blog - Fri, 07/11/2014 - 10:31

 

I somehow forgot to blog about this, but it’s never too late. My new online course Beautiful Builds and Continuous Delivery patterns is now available and is $25 until the end of this month (July).

Here’s the course description:

Ah, Continuous Delivery. Everybody and their sister are talking about it, but in real life, nothing is ever as simple as listening to a conference talk about it.

  • Can you really deploy 20 times a day if your QA department is breathing down your neck because they are using the staging server between 9 and 5?
  • Are the teams waiting for each other to finish their work, creating bottlenecks?
  • Is security threatening to have you fired for even suggesting you deploy to production?

In this course Roy Osherove, author of the books “Beautiful Builds” (still in progress, actually) and “The Art of Unit Testing”, discusses common problems and solutions (patterns!) during build automation and continuous delivery.

We start from the basics, defining the differences between automated builds and CI, separation of concerns in build management, and then move on to more advanced things such as making builds faster using artifacts, solving versioning issues with snapshots, cross-team dependencies, and much more.

  • All videos are both streamable AND downloadable to watch offline, no DRM.

More info at beautifulbuilds.com.

 

Categories: Blogs

Should you write unit tests or integration tests?

ISerializable - Roy Osherove's Blog - Fri, 07/11/2014 - 10:22

I got this question in the mail. I thought it was quite valid for many other people:

Question:

 

Trying to promote unit tests in a new workplace: the “search” action from the UI goes through an IoC container which calls a WCF service, where the search itself is done using Entity Framework auto-generated code. My colleague claims that due to the multiple dependencies, it doesn’t make sense to fake everything when running the unit test, and that it makes more sense to do integration tests in this case.

Is my hunch correct? Should I put efforts into implementing unit tests in this complex scenario?

 

My Answer:

You can go either way.

It really depends on time tradeoffs:
  1. How long will it take to write the tests? For integration tests, you will have to wait until the whole system is complete before the tests can even fail; unless you start from a web page, in which case they will keep failing until the entire stack of layers is built.
  2. How long does it take to run them and get feedback? Integration tests usually take longer to run and are more complicated to set up, but when they pass you get a good sense of confidence that all the parts work nicely with each other. With unit tests you get faster feedback, but you will still need those integration tests for full system confidence. On the other hand, with integration tests and no unit tests, developers have to wait through long cycles before they know if they broke something.

There is no one answer. I usually do a mix of both. For web systems I might even start with acceptance tests that fail, and then slowly fill the system with unit tests for the parts of the features with more focused functionality. I guess you can change the question to: what type of tests should we write FIRST? Only you know that. It changes for every project and system.
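To make the tradeoff concrete, here is a minimal sketch of the unit-test side in TypeScript. The SearchService and ProductGateway names are stand-ins I made up for the UI/WCF/Entity Framework layers in the question; they are illustrative, not from the original mail.

interface ProductGateway {
  query(term: string): string[]; // stands in for the WCF/EF layer
}

class SearchService {
  constructor(private gateway: ProductGateway) {}

  search(term: string): string[] {
    if (term.trim() === "") return []; // guard clause worth unit testing
    return this.gateway.query(term.toLowerCase());
  }
}

// Unit test: the real gateway is replaced by a hand-rolled fake, so the test
// runs in milliseconds and fails only when SearchService's own logic breaks.
const fakeGateway: ProductGateway = {
  query: (term) => (term === "widget" ? ["Widget A", "Widget B"] : []),
};

const service = new SearchService(fakeGateway);
console.assert(service.search("Widget").length === 2, "search should normalize case");
console.assert(service.search("   ").length === 0, "blank searches should short-circuit");

An integration test would instead wire SearchService to the real service and database, trading the millisecond feedback above for end-to-end confidence.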

 

Categories: Blogs

Software Testing T~log! (help needed)

Yet another bloody blog - Mark Crowther - Fri, 07/11/2014 - 02:41
Hey All,

Well, I finally got my equipment and software sorted out to be able to Vlog. That is, Video Log, creating a kind of video based diary or End of Day Stand-up.

However, it's not that life-diary kind of Vlog, it's a Testing Vlog or, as I'm going to call it, a T~log! Yay, as in Testing, kinda like a video blog :) #t~log! is my official new hashtag.

I've been wanting to do these for a while but couldn't nail the format. In the end it struck me there was more than enough stuff happening in a testing day to chat to the testing community about for 10 to 15 minutes.

In the latest video I mention a new paper on the main site, An Approach to Project Sizing and Complexity - grab it here: http://www.cyreath.co.uk/papers.html. There's also the London Tester Gathering on the 20th July; let people know you're coming by visiting the Meetup site http://www.meetup.com/agiletesting/. I also mention the CIA style guide, testing feeds and plans for the next videos.

Once I work out the tech, I hope to get others on the t~log! and pull in more stuff than just from my own testing day. It would be great to have a literal EoD Stand-up t~log! of what happened today in testing. Bear with me while I polish the format...


but... I need your help
I will: 

  • Try and t~log! daily (Mon to Thursday minimum)
  • Call out meetings, events, sites, blogs, resources, etc. that are of interest to the community
  • Get others to t~log! with me
  • Keep the t~log! informative and provide links to resources etc

I need you to:



Thanks in advance,

Mark

Subscribe, Watch, Rate! http://www.youtube.com/subscription_center?add_user=cyreath

#t~log!




Categories: Blogs

Throwback Thursday: 80’s Tech at its Best

uTest - Thu, 07/10/2014 - 22:46

The 80s brought with them an incredible range of technology that, for better or worse, shaped the age we live in now. For this TBT, we’ll take a quick look at some of the more surreal/novel items that came from the land of neon and synth.


The Private Eye, brought to us by Reflections Technology, allowed the wearer to view a 1-inch LED screen with image quality comparable to a 12-inch display. Released in 1989, the Private Eye head-mounted display was used by hobbyists and researchers alike, going on to become the subject of an augmented reality experiment in 1993. To think that this type of wearable technology has only been tapped into fully within the past 3 years is pretty mind-blowing.


The Stereo Sound Vest gave the wearer a $65 portable speaker solution, providing a ‘safer’ listening option without the use of headphones. With zip-off sleeves, it’s a wonder this wasn’t all the rage.


This all-in-one player included an AM-FM stereo, microcassette player, recorder-player, calculator, and a digital alarm clock that fit in your hand. This was the Swiss Army knife of media at the time…and boy was it a looker!

What’s your favorite piece of 80s tech nostalgia that you yearn for? Be sure to let us know in the comments.

Categories: Companies

uTest Non-profit Partner Brings 150 Software Testing Jobs to the Bronx

uTest - Thu, 07/10/2014 - 19:43

IT job training non-profit Per Scholas plans to bring 150 new software testing jobs to the Bronx, New York, this Fall when it opens a large software testing center there.

According to a DNAinfo.com news story:

Per Scholas, which is based in The Bronx, and the IT consulting company Doran Jones plan to open the roughly $1 million, three-story, 90,000-square-foot software testing center at 804 E. 138th St., near Willow Avenue.

All of the entry-level jobs will be sourced from Per Scholas graduates, and the boom of 150 new jobs is widely expected to open a lot of doors not usually available in the urban Bronx neighborhood. Keith Klain, co-CEO of Doran Jones, hopes to see the center eventually grow to 500 employees.

As a proud partner of Per Scholas, uTest was there for the groundbreaking of the testing center earlier in 2014, and looks forward to many more lives that we can collectively influence.

Per Scholas is a non-profit with the mission of breaking the cycle of poverty by providing technology education, access, training and job placement services for people in underserved communities.

 

Categories: Companies

Stop Comparing Software Delivery With Manufacturing!

James Betteley's Release Management Blog - Thu, 07/10/2014 - 17:54

A couple of weeks ago I was at an Experience Devops event in London and I was talking about how software delivery, which is quite often compared to a manufacturing process, is actually more comparable to a professional sports team. I didn’t really get time to expand on this topic, so I thought I’d write something up about it here. It all started when I ran a cheap-and-nasty version of Deming’s Red Bead Experiment, using some coloured balls and an improvised scoop…

The Red Bead Experiment

I was first introduced to Deming’s Red Bead Experiment by a guy called Ben Mitchell (you can find his blog here). It’s good fun and helps to highlight how workers are basically constrained by the systems they work in. I’ll try to explain how the experiment works:

  • You have a box full of coloured beads
  • Some of the beads are red
  • You have a paddle with special indentations, which the beads collect in (or you could just use a scoop, like I did).
  • You devise a system whereby your “players” must try to collect exactly, let’s say, 10 red beads in each scoop.
  • You record the results

Now, given the number of red beads available, it’s unlikely the players will be able to collect exactly 10 beads in each scoop. In my specially tailored system I told the players to keep their eyes closed while they scooped up the balls. I also had about half as many red beads as any other colour (I was actually using balls rather than beads, but that doesn’t matter!). The results from the first round showed that the players were unable to hit their targets. So here’s what I did:

  • Explain the rules again, very clearly. Write them down if necessary. Be as patronising as possible at this point!
  • Encourage the players individually
  • Encourage them as a team
  • Offer incentives if they can get the right number of red beads (free lunch, etc)
  • Record the results

Again, the results will be pretty much the same. So…

  • Threaten the individuals with sanctions if they perform badly
  • Pick out the “weakest performing” individual
  • Ask them to leave the game
  • Tell the others that the same will happen to them if they don’t start hitting the numbers.

In the end, we’ll hopefully realise that incentivising and threatening the players has absolutely zero impact on the results, and that the numbers we’re getting are entirely a result of the flawed system I had devised. Quite often, it’s the relationship between workers and management that gets the attention in this experiment (the encouragement, the threats, the singling out of individuals), but I prefer to focus on the effect of the constraining system. Basically, how the results are all down to the system, not the individual.
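To see why the encouragement and threats cannot move the numbers, here is a toy simulation in TypeScript (my own sketch, not part of Deming's materials). The intervention labels never enter the drawing logic, which is exactly the point:

// Each scoop draws 20 beads at random from a box where roughly 1 in 3 is red.
function scoop(redRatio: number, scoopSize: number): number {
  let reds = 0;
  for (let i = 0; i < scoopSize; i++) {
    if (Math.random() < redRatio) reds++;
  }
  return reds;
}

// "Management interventions" are just labels: nothing they represent
// touches the draw, so every round shows the same spread.
const rounds = ["baseline", "after encouragement", "after threats"];
for (const label of rounds) {
  const results = Array.from({ length: 6 }, () => scoop(1 / 3, 20));
  console.log(`${label}: ${results.join(", ")} red beads per scoop`);
}

Only changing scoop(), the box, or the target, that is, the system itself, changes the results.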

Thanks Kanban!

I think one of the reasons why the software industry is quite obsessed with traditional manufacturing systems is because of the Toyota effect. I’m a huge fan of the Toyota Production System (TPS), Just-in-Time production (JIT), Lean manufacturing and Kanban – they’re all great ideas and their success in the manufacturing world is well documented. Another thing they all have in common is that various versions of these principles have been adopted into the software development world. I also happen to think that their application in the software development world has been a really good thing. However, the side-effect of all this cross-over has been that people have subconsciously started to equate software delivery processes with manufacturing processes. Just look at some of the terminology we use every day:

  • Software engineering 
  • Software factories
  • Kanban
  • Lean
  • Quality Control (a term taken directly from assembly lines)

It’s easy to see how, with all these manufacturing terms around us, the lines can become blurred in people’s minds. Now, the problem I have with this is that software delivery is NOT the same as manufacturing, and applying a manufacturing mindset can be counter-productive when it comes to the ideal culture for software development.

The crucial difference is the people and their skillsets. Professionals involved in software delivery are what are termed “knowledge workers”: their knowledge is their key resource, it’s what sets them apart from the rest. Manufacturing processes, by contrast, are designed around people with a very different skillset, often ones that involve doing largely repetitive tasks, or following a particular routine. These systems tend not to encourage innovation or “thinking outside of the box” – that sort of thing is usually assigned to management, or other people who tend not to be on the production line itself. Software delivery professionals, whether it be a UX person, a developer, QA, an infrastructure engineer or whatever, are all directly involved in the so-called “production line”, but crucially, they are also expected to think outside of the box and innovate as part of their jobs. This is where the disconnect lies, in my opinion. The manufacturing/production line model does NOT work for people who are employed to think differently and to innovate.

If Not Manufacturing Then…

Ok, so if software delivery isn’t like manufacturing, then what is it like? There must be some analogous model we can endlessly compare against and draw parallels with, right? Well, maybe…

 

home sweet home

I’m from a very rural area of west Wales and when anyone local asks me what I do, I can’t start diving into the complexities of Agile or devops, because frankly it’s so very foreign to your average dairy farmer in Ceredigion. Instead, I try to compare it with something I know they’ll be familiar with, and if there’s one thing that all people in west Wales are familiar with, it’s sheep rugby.

It’s not as daft as it sounds, and I’ve started to believe there’s actually a very strong connection between professional team sports and Agile software development. Here’s why:

Software delivery is a team effort but also contains subject matter experts who need to be given the freedom to put their skills and knowledge to good use – they need to be able to improvise and innovate. Exactly the same can be said of professional rugby or soccer (yes, I’m going to call it soccer) teams. Rugby and soccer are both team sports, but both contain very specific roles within the team, and for teams to be successful, they need to give their players the freedom and space to use their skills (or “showing off”, as some people like to call it).

2008 World Player of the Year Shane Williams

Now, within a rugby team you might have some exceptionally talented players – perhaps a winger like former World Player of the Year Shane Williams. But if you operate a system which restricts the amount of involvement he gets in a game, he’ll be rendered useless, and the team may very well fail. Even with my dislike of soccer, I know enough to see how restrictive formations and systems can be. The long ball game, for instance, might not benefit a Lionel Messi-style player who thrives on a possession and passing game.

The same can be said of software delivery. If we try to impose a system that restricts our individual’s creativity and innovation, then we’re really not going to get the best out of those individuals or the team.

 

So Where Does Agile Fit Into All of This?

Agile is definitely the antidote to traditional software development models like Waterfall, but it’s not immune from the same side-effects we witness when we do the red bead experiment. It seems that the more prescriptive a system is, the greater the risk of that system being restrictive. Agile itself isn’t prescriptive, but Kanban, XP, Scrum and so on, to varying degrees, are (Scrum being more prescriptive than Kanban, for instance). The problem arises when teams adopt a system without understanding why the rules of that system are in place.

prescriptive = restrictive

For starters, if we don’t understand why some of the rules of Scrum (for instance) exist, then we have no business trying to impose them on the team. We must examine each rule on merit, understand why it exists, and adapt it as necessary to enable our team and individuals to thrive. This is why a top-down approach to adopting agile is quite often doomed to fail.

So What Should We Do?

My advice is to make sure everyone understands the “why” behind all of the rules that exist within your chosen system. Experiment with adapting those rules slightly, and see what impact that change has on your team and on your results. Hmmm, that sounds familiar…

The Deming Cycle: Plan, Do, Check, Act

 


Categories: Blogs

Understanding Application Performance on the Network – Part V: Processing Delays

In Part IV, we wrapped up our discussions on bandwidth, congestion and packet loss. In Part V, we examine the four types of processing delays visible on the network, using the request/reply paradigm we outlined in Part I.

Server Processing (Between Flows)

From the network’s perspective, we allocate the time period between the end of […]

The post Understanding Application Performance on the Network – Part V: Processing Delays appeared first on Compuware APM Blog.

Categories: Companies

CloudBees Announces Public Sector Partnership with DLT Solutions


Continuous Delivery is becoming a main initiative across all vertical industries in the commercial and private markets. The ability for IT teams to deliver quality software on an hourly, daily or weekly basis is the new standard.

The public sector has the same need to accelerate application delivery for important governmental initiatives. To make access to the CloudBees Continuous Delivery Platform easier for the public sector, CloudBees and DLT Solutions have formally partnered to provide Jenkins Enterprise by CloudBees and Jenkins Operations Center by CloudBees to federal, state and local governmental entities.

With Jenkins Enterprise by CloudBees now offered by DLT Solutions, public sector agencies have access to our 23 proprietary plugins (along with 900+ OSS plugins) and will receive professional support for their Jenkins continuous integration/continuous delivery implementation.

Some of our most popular plugins can be utilized to:
  • Eliminate downtime by automatically spinning up a secondary master when the primary master fails with the High Availability plugin
  • Push security features and rights onto downstream groups, teams and users with Role-based Access Control
  • Auto-scale slave machines when you have builds starved for resources by “renting” unused VMware vCenter virtual machines with the VMware vCenter Auto-Scaling plugin
Try a free evaluation of Jenkins Enterprise by CloudBees or read more about the plugins provided with it.

For departments using larger installations of Jenkins, CloudBees and DLT Solutions propose Jenkins Operations Center by CloudBees to:
  • Access any Jenkins master in the enterprise. Easily manage and navigate between masters (optionally with SSO)
  • Add masters to scale Jenkins horizontally, instead of adding executors to a single master. Ensure no single point of failure
  • Push security configurations to downstream masters, ensuring compliance
  • Use the Update Center plugin to automatically ensure approved plugin versions are used across all masters
Try a free evaluation of Jenkins Operations Center by CloudBees, or watch a video about Jenkins Operations Center by CloudBees.

The CloudBees offerings, combined with DLT Solutions’ 20+ years of public sector “know-how”, make it easier to support and optimize Jenkins in the civilian, federal and SLED branches of government.

For more information about the newly established CloudBees and DLT Solutions partnership read the news release.

We are proud to partner with our friends at DLT Solutions to bring continuous delivery to governmental organizations.

Zackary Mahon
Business Development Manager
CloudBees

Categories: Companies

Integrating TestTrack with Git and Other Source Control Providers

The Seapine View - Thu, 07/10/2014 - 12:00

TestTrack 2014.1 introduces source control integration with Git, GitHub, and other external providers. This integration allows users to attach source files to TestTrack items when pushing changes to the source control server, which can help team members keep track of source file changes and quickly find information in their source control tool while working in TestTrack.

The TestTrack administrator is responsible for setting up the integration components. First, install and configure the new source control provider CGI (ttextpro.exe) on the TestTrack web server. This CGI accepts attachment data from the source control provider and sends it to the TestTrack Server. See the TestTrack installation help for information about installing and configuring the CGI.

Next, in the TestTrack Client, add the source control provider to generate the required integration key. When adding providers, you can also enter commit and file URLs to specify the format for links included with attachment information on the Source Files tab in items.

[Screenshot: Source Control Providers dialog]

Finally, use the new source control provider API to create hook scripts that pass attachment data from your source control provider to the TestTrack source control provider CGI, and install the scripts in the central and local Git repositories. These scripts must include the provider key from the Source Control Providers dialog box in TestTrack to work correctly. To create these scripts, you should understand how to use JSON to pass data from your source control provider. Sample commit-msg and post-receive scripts are available, which respectively verify items exist in the project when committing changes to a local Git repository and attach files to items when changes are pushed to the Git Server. You may want to use the sample scripts as a handy reference when creating scripts for your integration. You can also contact Seapine Services for help creating scripts.
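As a rough illustration of the shape such a hook might take, here is a sketch of a post-receive script written as a Node-style TypeScript program. The endpoint path, provider-key field and JSON payload below are assumptions for illustration only; the actual format is defined by the sample scripts and the CGI documentation mentioned above.

#!/usr/bin/env node
// Hypothetical post-receive hook: reads pushed revisions from stdin and posts
// attachment data as JSON to the TestTrack source control provider CGI.
import { execSync } from "child_process";

const TTEXTPRO_URL = "https://ttweb.example.com/cgi-bin/ttextpro.exe"; // assumed path
const PROVIDER_KEY = "YOUR-PROVIDER-KEY"; // from the Source Control Providers dialog

let input = "";
process.stdin.on("data", (chunk) => (input += chunk));
process.stdin.on("end", async () => {
  for (const line of input.trim().split("\n")) {
    const [, newRev] = line.split(" "); // post-receive lines: <old> <new> <ref>
    const message = execSync(`git log -1 --format=%B ${newRev}`).toString();
    const tag = message.match(/\[([A-Z]+-\d+)\]/); // e.g. [IS-34]
    if (!tag) continue;
    await fetch(TTEXTPRO_URL, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ key: PROVIDER_KEY, item: tag[1], commit: newRev }),
    });
  }
});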

After the integration is configured, users can attach source files to TestTrack items when they push changes to the source control server. To attach Git files to items, enter the tag for the item in the commit message. For example, enter [IS-34] to attach the commit to issue 34.

[Screenshot: commit message tagged with a TestTrack item]

To view the attached files in TestTrack, click the Source Files tab when working with an item. Click a file path or commit message to view additional file information in the associated source control viewer.

[Screenshot: Source Files tab]

For more information about integrating TestTrack with Git or other source control providers, see the TestTrack help.


Categories: Companies

5 things you didn’t know a testing framework could do

BugBuster - Thu, 07/10/2014 - 11:43

We at BugBuster believe that testing frameworks and cloud platforms can take test engineering to a higher ground.

Case in point: we included in BugBuster several cool features you wouldn’t believe a regular testing framework could do: smart exploration, file upload testing, web-to-email testing, language testing and getting rid of those annoying sleeps and waits, normally required to deal with asynchronous user interfaces.

 

Smart exploration

BugBuster is a webkit-based browser that runs on several dedicated servers in our cloud, and features advanced logic enabling automated smart exploration. BugBuster will load a page, analyze its DOM, and produce a list of actions it can perform. Then, it will try combinations of these actions in a deterministic way, uncovering bugs and issues that can easily be reproduced for debugging.

Doing so, BugBuster is able to test those edge cases that a human tester would not necessarily have found. And, thanks to our conductor API, you have full control over the automated testing process; as a result, you can let BugBuster crawl around your website and react with a precisely defined behaviour when it reaches a certain location or executes a certain action.

How BugBuster’s automated exploration works

Timing insensitive testing: getting rid of sleeps and waits

BugBuster’s technology behaves differently from other testing frameworks in that it is timing-insensitive: after each action, it will wait for the page to stabilize before executing the next action in the queue. BugBuster detects AJAX calls, JavaScript animations and page loads, allowing you to avoid cumbersome sleep() and wait() function calls.

This code snippet shows how easy it is to guide BugBuster through a test scenario that involves page loads:
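A sketch of such a scenario follows; the session object is a hypothetical API surface, declared here only so the example type-checks, and the real BugBuster scenario API may differ. What matters is the absence of sleep() and wait() calls:

// Hypothetical scenario API, assumed for illustration.
declare const session: {
  click(selector: string): void;
  fill(selector: string, value: string): void;
  expectText(selector: string, text: string): void;
};

// No sleep() or wait() anywhere: BugBuster queues each action and executes it
// only once the AJAX calls, animations and page loads triggered by the
// previous action have settled.
session.click("a#login");               // triggers a page load
session.fill("#user", "alice");         // runs once the new page has stabilized
session.fill("#pass", "secret");
session.click("button[type=submit]");   // another load, still no explicit wait
session.expectText("h1", "Welcome, alice");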

File upload testing

File uploads are a major headache when writing automated tests for a web application. BugBuster helps lighten this burden by providing advanced file upload capabilities. Several other tools and frameworks also support this feature, but they are limited and depend heavily on the browser being used for the automation. BugBuster improves on this feature in several ways:

  • Providing a catalog of pre-built files ready to use in your tests.
  • Providing a file generator that can produce different file formats and sizes with your own content, on the fly.
  • Supporting multiple file uploads.
  • Allowing you to check the accepted file types in the file chooser.

The following code illustrates how easy it is to generate a file to test a file upload:
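This is a stand-in sketch: the files.generate() and session.upload() helpers are assumptions based on the feature list above, not BugBuster's documented API.

// Hypothetical helpers, assumed for illustration.
declare const files: {
  generate(opts: { name: string; type: string; sizeKB: number; content?: string }): unknown;
};
declare const session: { upload(selector: string, file: unknown): void };

// Generate a 2 MB PDF on the fly and feed it to the page's file chooser.
const invoice = files.generate({
  name: "invoice.pdf",
  type: "application/pdf",
  sizeKB: 2048,
});
session.upload("input[type=file]", invoice);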
You can find more information about automated file upload testing in this post.

Web-to-email testing

BugBuster provides more control over the test process while considerably simplifying both the writing of test scenarios and the underlying infrastructure. It generates an email address on the fly, linked to the running test session; then, BugBuster’s incoming email servers will receive the message and dispatch it to the right running session. The result? You can neatly write test cases that have access to the whole email document and envelope: subject, body, headers, addressees, sender, and so on, without needing to set up any email server.
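A sketch of what such a test case could look like; the mailbox API below is assumed for illustration, and the linked post shows the real scenarios:

// Hypothetical per-session mailbox API, assumed for illustration.
declare const mailbox: {
  address(): string; // address generated on the fly for this session
  waitForMessage(): Promise<{ subject: string; from: string; body: string }>;
};
declare const session: {
  fill(selector: string, value: string): void;
  click(selector: string): void;
};

async function checkSignupEmail(): Promise<void> {
  // Sign up using the generated per-session address...
  session.fill("#email", mailbox.address());
  session.click("button#signup");
  // ...then assert on the full message: subject, envelope and body are all visible.
  const msg = await mailbox.waitForMessage();
  if (!msg.subject.includes("Confirm your account")) {
    throw new Error(`unexpected subject: ${msg.subject}`);
  }
}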

How BugBuster closes the web-to-email testing loop

You can read more about web-to-email testing on this post.

Language testing

It is not unusual to see multi-language websites with text displaying in the wrong language. At BugBuster we have a solution for that: thanks to our language detection module, you can now automate the detection of languages in your test scenarios.

Using it is quite straightforward: just require the language module and use its detect() function to do all the heavy lifting for you. Then, you can use the result of the detection to know which languages are present on a particular page.

Say you have a multilingual CMS or web application, available in English, French, German, Chinese and Spanish, and you want to ensure that all the pages are presented to the user in the same language, in this case English. The following code will allow BugBuster to crawl your application by repeating the same language test on each page it discovers:
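A sketch of such a test follows, using the require() and detect() calls described above; the module's exact return shape and the session helpers are assumptions for illustration:

declare function require(name: string): any; // scenario modules loaded CommonJS-style (assumed)
declare const session: { currentUrl(): string; bodyText(): string };

const language = require("language"); // module name per the text; exact API assumed

// Repeated on every page BugBuster discovers while crawling:
const detected: string[] = language.detect(session.bodyText());
if (!detected.includes("en")) {
  throw new Error(`Page ${session.currentUrl()} is not in English: ${detected.join(", ")}`);
}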

You can read more about language testing on this post.

The post 5 things you didn’t know a testing framework could do appeared first on BugBuster.

Categories: Companies

.NET in SonarQube: bright future

Sonar - Thu, 07/10/2014 - 11:12

A few months ago, we started on an innocuous-seeming task: make the .NET Ecosystem compatible with the multi-language feature in SonarQube 4.2. What followed was a bit like one of those cartoons where you pull a string on the character’s sweater and the whole cartoon character starts to unravel. Oops.

Once we stopped pulling the string and started knitting again (to torture a metaphor), what came off the needles was a different sweater than what we’d started with. The changes we made along the way – fewer external tools, simpler configuration – were well-intentioned, and we still believe they were the right things to do. But many people were at pains to tell us that the old way had been just fine, thank you. It had gotten the job done on a day-to-day basis for hundreds of projects and hundreds of thousands of lines of code, they said. It had been crafted by .NETers for .NETers, and as Java geeks, they said, we really didn’t understand the domain.

And they were right. But when we started, we didn’t understand how much we didn’t understand. Fortunately, we have a better handle on our ignorance now, and a plan for overcoming it and emerging with industry-leading C# and VB.NET analysis tools.

First, we’re planning to hire a C# developer. This person will be first and foremost our “really get .NET” person, and represents a real commitment to the future of SonarQube’s .NET plugins. She or he will be able to head off our most boneheaded notions at the pass, and guide us in the ways of righteousness. Or at least in the ways of .NETness.

Of course it’s not just a guru position. We’ll call on this person to help us progressively improve and evolve the C# and VB.NET plugins, and their associated helpers, such as the Analysis Bootstrapper. He (or she) will also help us fill the gaps back in. When we reworked the .NET ecosystem there were gains, but there were also losses. For instance, there are corner cases not covered today by the C# and VB.NET plugins which were covered by the old .NET Ecosystem.

We also plan to start moving these plugins into C#. We’ve realized that we just can’t do the job as well in Java as we need to. But the move to C# code will be a gradual one, and we’ll do our best to make it painless and transparent. Also on the list will be identifying the most valuable rules from FxCop and ReSharper and re-implementing them in our code.

At the same time, we’ll be advancing on these fronts for both C# and VB.NET:

  • Push “cartography” information to SonarQube.
  • Implement bug detection rules.
  • Implement framework-specific rules, for things like SharePoint.

All of that with the ultimate goal of becoming the leader in analyzing .NET code. We’ve got a long way to go, but we know we’ll bring it home in the end.

Categories: Open Source

More Ruby goodness for testing

Did I mention how much I love Ruby?

require "active_support/core_ext/array/grouping" # Array#in_groups comes from ActiveSupport

# Split the alphabet into 5 groups of (nearly) equal size, without nil padding.
items = ("A".."Z").to_a.in_groups(5, false)

5.times do |i|
  puts items[i].flatten.to_s
  puts "----"
end

Source code is at http://apidock.com/rails/Array/in_groups

Categories: Blogs

Planned changes in Jenkins User Conference contact information collection



One of the challenges of running Jenkins User Conferences is to balance the interests of attendees with the interests of sponsors. Sponsors would like to know more about attendees, but attendees are often wary of getting contacted. Our past few JUCs have been run by making it opt-in to have contact information passed to sponsors, but the ratio of people who opt in is too low. So we started thinking about adjusting this.

So our current plan is to reduce the amount of data we collect and pass on, but to make this automatic for every attendee. Specifically, we'd limit the data only to name, company, e-mail, and city/state/country you are from. But no phone number, no street address, etc. We discussed this in the last project meeting, and people generally seem to think this is reasonable. That said, this is a sensitive issue, so we wanted more people to be aware.

By the way, the call for papers for JUC Bay Area is about to close in a few days. If you are interested in giving a talk (and that's often the best way to get feedback and take credit for your work), please make sure to submit it this week.

Categories: Open Source

iOS 8 Crowding Out Fitness Apps?

uTest - Wed, 07/09/2014 - 21:29

This week, Apple released the latest beta of iOS 8 to developers for the iPhone, iPad, and iPod Touch. Among other additions, the fleshing out of the new Health app means big changes for developers.

In the initial iOS 8 preview, Health, Apple’s centralized health and fitness hub app, was more of a shell, designed to take in data from third-party providers. In the Beta 3 release, however, it can now track both steps and calories on its own. Additionally, you can measure your caffeine intake as well as monitor a lengthy list of nutritional categories.

The addition of these new features shows Apple’s likely trajectory into the booming fitness app/wearable arena. While in the past Apple has been content to let third-party providers handle these services, the new native health and fitness tracking functionality of iOS 8’s Health will force many developers to create value and fill in the gaps around what had been their main offering.

Is Apple encroaching on this fertile app territory the right move?

You can see an expanded list of the new features of iOS 8 beta 3 right here.

Categories: Companies

Video Courses at uTest University for Testers on the Go

uTest - Wed, 07/09/2014 - 18:30

Are you more of a visual learner? Perhaps you just don’t have the time to sift through vast chapters of knowledge as a busy tester? Video-based courses at uTest University may be just what you’re looking for. The uTest University library is full of video courses for when you’re on the go, featuring topics including:

  • Accessibility
  • Test Automation (including Selenium basics)
  • Capturing logs on iOS/Android devices
  • Introductions to iOS and Android testing
  • Essentials for well-written bug reports
  • Penetration testing
  • Common testing mistakes to avoid

Take a look at all of the Video courses at uTest University today.

uTu is free for all members of the uTest Community. We are constantly adding to our course catalog to keep you educated on the latest topics and trends. If you are an expert in UX, load & performance, security, or mobile testing, you can share your expertise with the community by authoring a uTu course. Contact the team at university@utest.com for more information.

Categories: Companies

Code analytics plug-in for Optimyth Kiuwan

IBM UrbanCode - Release And Deploy - Wed, 07/09/2014 - 18:19

The Kiuwan team from Optimyth has released a Kiuwan Plugin for IBM UrbanCode Deploy for their code analytics tool. Kiuwan is a hosted code analyzer that tracks code quality and indicates where the quickest payoffs are when working to improve.

The pattern of linking code analytics to the deployment pipeline is one we are familiar with and something our AnthillPro customers have done quite a lot. Analyzing the code can be a lengthy process compared to the rapid feedback delivered by a continuous integration build loop, so teams look to analyze only some of their builds. Which builds do you want to analyze? Setting up a nightly build with the analyze flag set to true can work, but really you want to make sure that anything going to production has been analyzed, in case the code scan finds one nasty bug. That requires that you either scan every build (too slow), know in advance which builds will be released (not agile enough), or scan every build that gets sufficiently far in the pipeline.

A typical pattern would be that when builds are promoted to a test environment that is entered once or twice a day, you look up the source code attached to the build(s) being promoted and scan that code. You do have that traceability, right?

The Kiuwan / UrbanCode Deploy integration works with this pattern nicely. The source code can be an artifact in Deploy, or pointed to by a version property and retrieved at deploy time. The plugin then invokes the scan and registers the results back to the Kiuwan server. From there, you can consider automatically or manually applying quality statuses based on the results of the scan, which work with Environment Gates to govern how far those builds can go. If there’s something in that scan that would preclude a production release, capture it and enforce the policy automatically.

The Kiuwan team has a nice video showing the integration in action:

Categories: Companies

Scheduling a Transfer

The Kalistick Blog - Wed, 07/09/2014 - 15:58

Like many people these days, I do much of my business online, including much of my banking. A few weeks ago, I tried to schedule a transfer through my bank’s web-based banking application. After carefully entering my request and clicking submit, my browser returned a page containing the text Error 500: java.lang.NullPointerException.

Had I been an attacker probing the system for weaknesses, I would have been very excited. Not only did I learn that they’re using Java on the back end, but I confirmed that the application contains at least one bug! I can only speculate since I didn’t try, but maybe I could have made some similar requests to learn more about the application. Perhaps I could have exposed sensitive data, or found more information about the application that would allow me to launch some kind of attack.

Since I’m interested in the bank improving their application for their benefit and mine (they hold some of my money, after all), I decided to inform the bank. I notified the technical support team about what happened and called it a night.

The next morning I received a cordial reply that thanked me for reporting the issue and all that. But, instead of recognizing that I had alerted him to a real bug in the application with potential security implications, he suggested that there was something wrong with my browser or that I had done something wrong. He recommended I clear my browser’s cache and cookies and try again.

I claim the bank violated two of the best practices for running secure web applications:

  1. When anything goes wrong in the server when handling a client request, display nothing to the user except that the operation failed (a sketch of this follows the list). A benign user isn’t going to do anything useful with any details, and a malicious user is hoping that you send as many details as possible.
  2. Never blame your customers for causing your server to generate an HTTP 500 Internal Server Error, as the fault is almost certainly yours, not your customers’.
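A minimal sketch of the first practice, in TypeScript with Express; this is an illustration of the idea only, since the bank's back end is Java, as the error page revealed:

import express from "express";

const app = express();

app.post("/transfers", (req, res) => {
  // Whatever goes wrong here, the details must not reach the browser.
  throw new Error("NullPointerException-equivalent: details stay server-side");
});

// Catch-all error handler: log the details privately, show the user nothing
// but a generic failure message.
app.use((err: Error, req: express.Request, res: express.Response,
         next: express.NextFunction) => {
  console.error(err.stack); // goes to server logs only
  res.status(500).send("The operation failed. Please try again later.");
});

app.listen(3000);

The user sees only the generic failure; the stack trace stays in the server logs where it belongs.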

In the end, I attempted the transfer again, and it worked correctly. I’ll never know if I got through the tech-support representative to the development team responsible for the code, though.

As one who works with software, it is no surprise to me that there are bugs like this in many applications. No development organization is immune. Every organization should take the quality and security of their applications seriously. I am not going to name the bank, but it is on this list of the fifty largest banks in the country as of 1Q 2014, according to the American Bankers Association.

Cheers!
Tim

The post Scheduling a Transfer appeared first on Software Testing Blog.

Categories: Companies

What’s New in TestTrack 2014.1 Documentation

The Seapine View - Wed, 07/09/2014 - 15:15

TestTrack 2014.1 is chock full of great new features and enhancements. We made lots of additions and changes to the TestTrack documentation to help guide you as you explore the new release. The following help topics will point you in the right direction.

Remember, documentation is always available on our web site. If you have documentation suggestions, please let us know.

Using the Home page
Also available in the web client
Explains information you can view on the new Home page, including recent project activity, your assigned items, and more. Also explains how to add widgets that a TestTrack administrator can configure for a project.

Configuring field value styles
Explains how to add, edit, and delete styles to spotlight important information in item lists and reports. You can use different colors, font styles, and icons in each style. Styles can be applied to list values and workflow states.

Creating new requirements from existing requirements
Also available in the web client
Explains how to create new requirements from existing requirements, which can help improve artifact reusability and traceability. TestTrack administrators configure item mapping rules to specify the information copied to new requirements and enforce rules, such as adding new requirements to a document or folder.

Integrating with Source Control Tools
Explains the source control integrations available with TestTrack and where to get more information about using them. The integration with Git, GitHub, and other providers allows users to attach source files to TestTrack items when pushing changes to the source control server. You can also find information about existing integrations with Surround SCM, CVS, Perforce, Subversion, and Microsoft Visual Source Safe.

Exporting project configuration reports
Explains how to export details about a project’s configuration to Microsoft Word, which is useful for validation. If you want to use a different look or wording for the report template, you can modify it.

Creating matrix reports
Explains how to enhance matrix reports by adding columns that contain the same contents as another column, adding an extra header and merging cells to group related columns, and selecting options to display text using styles configured for field values.

Trend report types
Provides details about each trend report type, including the new ‘Items in each workflow state when the period ended’ option.

Setting security options
Explains how TestTrack administrators can enable encryption and key exchange for secure TestTrack client/server communication. You can also read details about how TestTrack encryption, authentication, and key exchange work.


Categories: Companies

HPC market to grow significantly in coming years

Kloctalk - Klocwork - Wed, 07/09/2014 - 15:14

The market for high performance computing solutions continues to accelerate. Organizations around the globe are increasingly turning to these systems in order to vastly upgrade the speed, quality and range of their computation capabilities, a trend which is likely to continue into the foreseeable future.

According to the most recent MarketsandMarkets report, the global market for HPC solutions will likely reach $33.4 billion by 2018. By comparison, this market stood at $24.3 billion in 2013. This marks an estimated compound annual growth rate of 6.6 percent for this period.

The study reported that North America will see the greatest spending and adoption of HPC solutions and services during this time.

Diverse uses
The MarketsandMarkets report noted that the growing and evolving uses of HPC solutions are the most significant driver of market development. Specifically, the study highlighted the increase in the sheer number of complex applications for which HPC solutions are now being applied. Additionally, government investment in HPC systems is increasing rapidly, greatly accelerating the market's overall growth.

Ultimately, the single biggest reason for the HPC market's significant development is that the technology is advancing, providing new, major benefits for organizations.

"HPC has resolved the grand scientific challenges and enabled the enterprise to make sound business decisions. This has resulted in emergence of a new breed of dedicated HPC vendors, providing robust and scalable HPC clusters which can store, analyze, and process data at the shortest possible time," MarketsandMarkets noted.

New opportunities
Furthermore, the MarketsandMarkets study revealed that HPC solutions providers are poised to reap significant benefits during this time.

"Companies providing HPC solutions are looking forward to gain a better competitive advantage in this growing market, thereby creating exascaling supercomputers and embedded processors, new networking and hot water cooling technologies for the government organizations and enterprises," the report explained.

As more organizations begin to leverage HPC solutions for the first time or to a greater degree than ever before, the need for advanced debugging software will also increase tremendously. With sophisticated memory and process analysis tools, firms can ensure that their HPC solutions perform at their maximum capabilities. Without such resources, though, the task of maintaining these advanced computing networks can prove overwhelming, thereby greatly diminishing the computation benefits that companies can enjoy from their investments.

Categories: Companies
