
Feed aggregator

UrbanCode Deploy supporting DevOps for Enterprise Systems

IBM UrbanCode - Release And Deploy - Mon, 07/07/2014 - 17:52

This post was authored by Rosalind Radcliffe, Distinguished Engineer and Chief Architect for CLM and DevOps. It was originally posted

Innovate started with the great demo experience described in my last blog entry, but that was just the start. DevOps for Enterprise Systems, and more specifically the new UrbanCode support for z/OS, was the talk of the week. On Monday, as part of the demo, we announced support for z/OS in UrbanCode Deploy. This is a major step forward in providing true multiplatform deployment support with a single tool. I was flooded with questions on this support throughout the week. Judging by the excitement and the questions, providing a single product that can deploy to multiple platforms, without relying on other products for z/OS support, is a big hit.

Why is it a big hit? UrbanCode Deploy provides a unified solution for continuous delivery in a heterogeneous environment that includes z/OS. It helps accelerate delivery and reduce the cycle time to develop and test applications, and it reduces cost and minimizes risk by providing a single solution that deploys to all environments.

Let’s take a walk through the new capability for z/OS in UrbanCode Deploy.

UrbanCode Deploy has updated its existing Java agent to also run on z/OS. It's the same agent on all platforms, so for those of you familiar with its current installation, the z/OS agent installs the same way. For those not already familiar with UrbanCode Deploy, the agent is a Java process that installs into the hierarchical file system. The agent runs with RACF or other SAF-compliant credentials. To provide the security granularity usually expected on z/OS, we require that an agent be installed and configured for each level in your promotion hierarchy, i.e., Dev, Test, UAT, PreProd, Prod. These agents can then be used for all your z/OS application deployments to all your environments, assuming you have shared DASD configured for your systems. If you have LPARs that are not connected via shared DASD, simply install agents on those systems as well. Remember: one agent per level in the hierarchy.

UrbanCode Deploy provides a new component type: z/OS. By default, this component type assumes it will be handling incremental deploys. The UrbanCode Deploy z/OS toolkit provides command-line capabilities that can be called from your existing z/OS build system to create a new version of the component in UrbanCode Deploy. The toolkit creates the version in CodeStation as a reference and stores the artifacts in a new CodeStation on z/OS. CodeStation is the UrbanCode artifact repository. For traditional z/OS components, instead of copying the files to the UrbanCode server's CodeStation, a secondary CodeStation is created on the z/OS build server to keep the z/OS parts on the z/OS system.

Once the version is created, it can be deployed. Now it's just like any other application deployment: you use the same UrbanCode Deploy graphical process editor to create the deployment process. A few new z/OS-specific process steps have been provided to support native z/OS deployments: z/OS Deploy, Copy to a PDS, Allocate a New PDS, FTP Deployment to Another System, and Run a TSO or ISPF Command. These steps have the added ability to run once per deployment, or once per member in the deployment.

Now you define your application process and perform your deployments as you would with any other component in UrbanCode Deploy. For example, let's say you have a WAS component that's already been defined; this component provides the front end to your application. You might also have a component defined with the web services definitions for CICS, and lastly your new z/OS component that contains the CICS load modules, DBRMs, etc. You can add all of these to the application process and deploy them together. The WAS application could be running on z or on any other platform.

There are obviously some very specific z/OS support capabilities in the latest UrbanCode Deploy offering, but they have been designed so that the user interaction for a deploy is the same no matter what type of system, or combination of target systems, is being deployed to. Your release teams no longer have to learn multiple tool sets, at minimum one for z/OS and one for distributed; UrbanCode Deploy is one tool for the entire enterprise. All the deployment information, including logs and any errors, is returned to the UrbanCode server, so understanding what happened is clear, and if there is a problem there are no other tool logs to examine.

UrbanCode Deploy is just one part of the story when it comes to DevOps for Enterprise Systems. It's the latest in a series of capabilities we are providing to let all your application teams take advantage of modern tools for z/OS development, the same way your distributed teams have for years. The key to the excitement is that these capabilities for z/OS development, including the latest UrbanCode release, provide a single way of performing the activities of the application life cycle no matter the underlying target platform. True DevOps for Enterprise Systems.

Categories: Companies

Mobile application security failures to be primary source of breaches in future

Kloctalk - Klocwork - Mon, 07/07/2014 - 15:00

The need for high-quality mobile security is growing rapidly. Consumers and businesses now rely on smartphones and tablets for an ever-increasing variety of critical tasks, and these devices inevitably must house and provide access to sensitive, valuable data. Mobile security breaches can put individuals and organizations at great risk of fraud, theft and more.

Mobile application security is essential in this context. If mobile apps are not fully protected, users' devices will be vulnerable.

According to a recent Gartner study, this is a serious, escalating issue. Within a few years, mobile security breaches will predominantly be caused by mobile application misconfigurations.

Mobile app issues
The Gartner study noted that mobile security breaches are currently relatively rare. However, this is not expected to remain the case: as more and more smartphones enter the market and are used more heavily, cyberattackers will turn their attention to mobile targets with growing frequency. Gartner forecasts that by 2017, three-fourths of all mobile security breaches will be attributable to mobile application misconfiguration.

"Mobile security breaches are – and will continue to be – the result of misconfiguration and misuse on an app level, rather than the outcome of deeply technical attacks on mobile devices," said Dionisio Zumerle, principal research analyst at Gartner. "A classic example of misconfiguration is the misuse of personal cloud services through apps residing on smartphones and tablets. When used to convey enterprise data, these apps lead to data leaks that the organization remains unaware of for the majority of devices."

Gartner noted that these attacks will mostly center on mobile devices that have been altered at the administrative level. Zumerle explained that the most common examples of such manipulation are the jailbreaking of iOS devices and rooting of Android devices. These actions remove app-specific protections, opening up the devices and their contents to potential mobile attacks.

Protecting mobile data
In recognition of this growing threat, Gartner offered a number of recommendations for IT security leaders looking to protect their companies' mobile users. Essentially, the report made it clear that business leaders should require their employees to take steps to increase the security of their devices.

Most obviously, Gartner recommended that IT leaders forbid personnel from jailbreaking or rooting their devices. Additionally, the source suggested that employees should not be allowed to utilize unapproved third-party app stores.

Gartner also emphasized the need to require signed apps and certificates when it comes to accessing business email, shielded apps and virtual private networks.

"We also recommend that they favor mobile app reputation services and establish external malware control on content before it is delivered to the mobile device," said Zumerle.

These guidelines and the general trend highlighted by Gartner emphasized the importance of establishing standards when it comes to application security. As mobile breaches become increasingly common, firms in every sector will inevitably develop stronger standards for app use, or else risk becoming a recurring target.

This poses both a challenge and an opportunity to mobile application developers. On the one hand, developers need to pay more attention to application security than ever before. If they fail to adequately address these issues and ensure the reliability of their mobile offerings, organizations and individuals will increasingly shun their products, choosing instead to utilize more secure alternatives.

However, by proactively embracing greater mobile security standards sooner rather than later, developers can position themselves as the most suitable option for businesses as they make the transition toward a more security-conscious selection process.

Categories: Companies

.NET Developers Around The Globe

NCover - Code Coverage for .NET Developers - Mon, 07/07/2014 - 13:07

It’s always cool to discover the many walks of life .NET developers come from and the many threads (pun intended) that connect us.  As an active .NET development team ourselves, we appreciate the many contributions made by members of this great community!

Grant Palin

Meet Grant Palin, our renaissance man. His personal site has the tagline "code, photos, books and anything else." It is a great place to get inspired or to learn about new tools from a diverse field. A native of Victoria, BC, Grant spent his early years reading anything he could get his hands on, and that tendency continues today. He has taken on hobbies such as drawing, photography and writing. His true passion, however, is information technology. A big fan of continuous learning, Grant fundamentally believes in pushing his boundaries by consuming literature, practicing and experimenting, taking courses, discussing with others, and attending presentations and conferences. Make sure to check out his personal site or follow him on Twitter @grantpalin.

Simone Chiaretta

Simone is a software developer and architect who has been coding on the .NET platform, both for business and for fun, since 2001, back when .NET was still in beta. He has lived across the globe, from Italy to New Zealand, climbing more than a couple of mountains along the way. No matter where he goes, Simone remains an active member of his community in Italy: he talks at various user groups and writes technical articles for both online and print magazines. When he is not writing code or blog posts, or taking part in the worldwide .NET community, you may find him on the shortest path up a mountain, which is usually a vertical one. Free climbing, mountain climbing and ice climbing are just a few of the ways he makes sure he is moving up. Follow his .NET expertise and mountain views on his blog or on Twitter at @simonech.

The post .NET Developers Around The Globe appeared first on NCover.

Categories: Companies

The Role of Middle Management in an Agile World

When discussing Agile roles, much is written about the Scrum Master, Product Owner, Development Team, and Customer, but little about what the Middle Manager should do in an Agile world. Note: when I talk about the Middle Manager, I am talking about the Line Manager, Functional Manager, Manager, and Director-level manager.
I recently discussed the middle management role within an Agile context with several different middle managers. They each had an interesting perspective on what it was like when their teams became Agile. Here are two excerpts:
  • The Functional Manager who was also the team's Line Manager noted that he spent much less time directing the team on what to work on, since the work was now coming out of the Product Backlog. I told him that yes, this is a big adjustment. He needed to focus on ensuring that his team members had the right skills, understood the Agile principles, and were given the education they needed to become a fully cross-functional team.
  • A Director who now has three self-organizing teams told me she was having a hard time knowing what to do, since she felt she had to get more hands-on. I told her that backing off, helping educate the team members in their new roles, and then allowing the teams to self-organize around the work was the right thing to do. She needed to provide more vision-level focus to connect the organization's strategies to the product visions. She commented that this was very different from the more traditional management role she had been used to.

Ultimately, it is important to understand that middle management is critical to the success of an effective Agile deployment. Middle managers are the linchpin between the executives' vision for Agile and the teams expected to live it. If they are engaged and buy into Agile, the change may succeed; even when executives buy in, middle managers who do not do likewise can block a team's ability to succeed with Agile.

If middle managers don't understand their role in the new order or feel threatened by the change, they may become Deceivers or Deniers and block the move toward Agile. Because of this, it is critical that middle managers are educated on Agile at the same time their teams are.
Middle management must adapt their role and learn to gently back away from functional leadership: act more as servant leaders who trust their teams, help them remove roadblocks, and support the Agile principles and practices. They may attend the Sprint Review to see the progress of working functionality and the Daily Stand-up to gain a sense of team progress. Middle management must also learn to establish the concept of bounded authority, where teams can make their own decisions and commit to their own work. The balance is that managers keep limited responsibilities to provide a vision and support their staff, while allowing teams ownership of their work. Finally, middle management must be willing to be transparent about what is going on in the organization and to communicate this information to the team.
Other Roles that may suit Middle Management
Often middle managers have less to do in an Agile world. The good news is that they have options, such as changing to a Resource Manager role, where they manage more people but do not own an organizational functional area. They may consider a Product Owner role if they have been engaged in collecting requirements and interacting with customers; although this role should no longer be managerial (i.e., no direct reports), a PO helps shape the product by collecting and grooming the requirements and collaborating with the team. They may also move to another Functional Manager role where there is still a need for one. And some will remain in their current middle management leadership roles. Those who want to continue in the more traditional middle management role may consider looking for companies that still hire for it.
How Middle Management can evaluate themselves in an Agile World
Here are a few questions middle managers can ask themselves to see how aligned they are in managing teams in an Agile World.
  • Are you allowing for self-organizing teams while still providing servant leadership? 
  • Are you removing command and control elements while providing bounded authority?
  • Are you supporting the Agile values and principles starting with marshaling a culture toward delivering value?
  • Do you remove the language of false certainty, big-up-front planning and requirements, and big batches?
  • Do you remove the significant roadblocks that hinder an agile team’s progress?
  • Do your teams perceive you as a coach and leader more than as a manager?
  • Are you helping the team with supporting their people and equipment needs?
  • Are you adapting the performance objectives to support team accomplishments to ensure they are delivering the highest value?
  • Do you help the teams when they have external team dependencies in order to get their work done?
  • Are you fostering a learning organization?  Do you provide teams the time to get educated (training, coaching, etc.)? 

Categories: Blogs

Heartbleed discovery highlights need for security testing

Kloctalk - Klocwork - Sun, 07/06/2014 - 21:00

The discovery of the Heartbleed security flaw shocked many cybersecurity experts. It is undoubtedly the most significant vulnerability ever revealed in a widely used open source software solution, and the fact that the affected software was used by so many companies for so long before the flaw was detected shook confidence in open source security in general.

However, according to most industry experts, Heartbleed was an outlier and does not suggest any inherent problems with open source security. Still, some, such as InformationWeek Executive Editor Srikanth RP, believe that this revelation should encourage organizations to refocus their efforts on security testing when utilizing open source solutions.

Testing needs
Srikanth noted that Heartbleed has generated a great deal of discussion as to whether open source security will actually prove sufficient for companies in the future. While many had previously predicted that open source would ultimately supplant all other forms of software development, including for cybersecurity programs, Heartbleed created fears that overlooked vulnerabilities may compromise security.

Yet studies suggest that open source solutions are actually more secure than proprietary offerings, the writer explained. He pointed to a recent Coverity report which found that open source projects typically feature a defect density of 0.59 per thousand lines of code. Proprietary solutions, on the other hand, have an average defect density of 0.72 per thousand lines. And as Srikanth noted, defect density is frequently used as a clear marker of software quality. This suggests that open source offerings generally have fewer potential security vulnerabilities than proprietary counterparts.
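To make those defect-density figures concrete, here is a small back-of-the-envelope sketch. The 0.59 and 0.72 densities come from the Coverity report quoted above; the 500 KLOC project size is a hypothetical example chosen for illustration.

```csharp
using System;

// Defect density is expressed as defects per thousand lines of code (KLOC).
public class DefectDensity
{
    // Expected defect count = density (defects/KLOC) * size (in KLOC).
    public static double ExpectedDefects(double densityPerKloc, int linesOfCode)
        => densityPerKloc * (linesOfCode / 1000.0);

    public static void Main()
    {
        const int loc = 500_000; // hypothetical 500 KLOC project

        Console.WriteLine(ExpectedDefects(0.59, loc)); // open source average: 295
        Console.WriteLine(ExpectedDefects(0.72, loc)); // proprietary average: 360
    }
}
```

On a project of that size, the gap between the two averages works out to roughly 65 expected defects.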

According to the writer, the message that company decision-makers should take away from all of this is that open source security is very achievable, but a renewed focus on testing is essential.

"The reality is that every open source project must be tested before being deployed – and it is the responsibility of developers, security experts and the dozens of big corporations that bundle in open source software with their software or hardware systems," Srikanth wrote.

This is especially true when firms rely upon open projects that lack sufficient volunteers. If a project, such as OpenSSL, doesn't have enough people to oversee it, the possibility of bugs being present skyrockets. With enough eyes on the project, these risks diminish.

In any event, companies can and should perform their own in-house testing of open source solutions before and during deployment, in order to ensure that these programs are sufficiently secure at all times.

The future of open source security
Going even further than Srikanth, industry expert Steven J. Vaughan-Nichols, writing for ZDNet, asserted that Heartbleed, while eye-opening, will not exert any real, lasting influence on the future of open source security. Open source has already won the battle for prominence over proprietary options.

"Outside of Apple and Microsoft, everyone, and I mean pretty much everyone, has already decided that open source is how they'll develop and secure their software," Vaughan-Nichols wrote. "Google, Facebook, Yahoo, Wikipedia, Twitter, Amazon, you know all of Alexa's top ten websites in the world, rely on open-source software every day of the year."

Vaughan-Nichols explained that Heartbleed occurred simply because the project was underfunded and users did not follow best practices, including those highlighted by Srikanth. When examined thoroughly by enough personnel, open source solutions become incredibly reliable and secure. 

"Put it all together and the facts show that, when done right, open source is the best way not just to develop software but to create secure software," Vaughan-Nichols concluded. "It's only in those corner cases, like OpenSSL with Heartbleed, where a program is both popular and under-funded, that there exists the real possibility of a major security problem."

Now that Heartbleed has occurred, the likelihood that similar mistakes will be made again is extremely low.

Categories: Companies

Agile for BI: A crash course

Kloctalk - Klocwork - Sun, 07/06/2014 - 16:00

The rise of Agile development is among the most significant software trends of recent years. With Agile strategies in place and supported by code review tools, firms can create and improve a wide range of software types far more efficiently and effectively than would ever be possible with legacy development methods.

As Agile development becomes more widely known and embraced, the possible applications will continue to increase and expand beyond the bounds of traditional application creation. Notably, John Harmann, consulting principal for CBIG, recently highlighted six key guidelines for leveraging Agile development tools and strategies for business intelligence (BI) purposes.

Agile BI
Writing for Information Management, Harmann noted that while Agile development and BI are not often thought of together, in reality Agile may be better suited for BI projects than most other potential applications. However, it is essential that firms apply the right strategies when pursuing these efforts.

According to Harmann, the ideal cycle for Agile BI development is three weeks. He explained that with such a schedule, the team can perform 2.5 weeks of work, then spend a few days on design and retrospectives. Thanks to code review tools and related technologies, no additional time is required to examine the BI project for errors.

Harmann also emphasized the importance of data when using Agile for BI. He highlighted the need to see data as features, rather than focusing on reports.

Furthermore, Agile BI project development requires a plan for refactoring, Harmann said.

"In a BI project, you'll often uncover many of your real data issues once you've built your complete star schema," he wrote. "Then you can write and perform queries to slice and dice data in different ways."

High-quality code refactoring solutions can prove critical for this purpose.

According to Harmann, it is also essential to have a thorough understanding of your constituency before beginning any Agile BI development project. Analytics or SQL experts on the team may eliminate the need for longer or additional internal cycles, instead enabling a sprint that includes a period for data analysis and demos of queries.

Agile BI developers must also bring a willingness to reconsider and adapt, Harmann argued.

"Regardless of the meeting name or approach to doing so, one of the key tenets of Agile development is refining your approach and adapting to change," he wrote. "That means looking at what you did, thinking about how you can improve and continually getting better."

Finally, Harmann emphasized that Agile BI development must be agile. That is to say, there is no definitive guide to these projects. On the contrary, decision-makers must incorporate their own experiences and unique needs and goals when pursuing such efforts, rather than following a hard-and-fast playbook.

The end in mind
One additional concept worth bearing in mind as firms pursue BI projects with Agile development is the importance of having an end-goal from the very beginning. As a recent uTest report explained, Agile practitioners must understand the big picture and develop definitive objectives if their efforts are to bear fruit. Without such clarity, these projects are liable to become unfocused, delivering less value for the organization.

This is particularly important because, as the report explained, many developers are tempted to seek out the latest, most cutting-edge technologies when leveraging Agile. As a result, they may end up using solutions that are not ideal for the particular needs of the specific endeavor. A holistic approach to Agile development for BI is far more likely to lead to optimized results.

Categories: Companies

DevOps unites development and operations teams

Kloctalk - Klocwork - Sat, 07/05/2014 - 15:46

As departmental silos continue to present difficulties for enterprise IT, limiting the flow of communication between teams and reducing opportunities for collaboration, decision-makers in a range of industries are seeking techniques to lessen the severity of these barriers. One increasingly popular way to chip away at silos is DevOps, a set of practices and tools that narrows the gap between the free-form work styles of development teams and the more regimented schedules of operations and maintenance staff. According to a recent article from ZDNet, outsourcing IT projects has become a less prominent strategy as DevOps allows for focused collaboration and code review among disparate internal teams.

"Insourced" operations drive innovation
By bringing software development back within the walls of the enterprise, business leaders are finding their IT staff better able to work together toward innovative breakthroughs, largely due to the team-based approach to coding that DevOps provides. As cloud services offer more processing power and storage to experiment with, companies are less apprehensive about cutting-edge initiatives that merge the creative mindset of development squads with strict and structured operations personnel. ZDNet also pointed out that many organizations are swapping their traditional large-scale cloud contracts for specialized systems that allow greater customization and an emphasis on network visibility, giving internal teams more precise control over their tech resources.

Another advantage afforded by DevOps methodologies is the ability for executives to cut redundant systems and personnel from their payrolls, allowing budgets to open up and funds to be allocated toward more innovative strategies. In a rapidly evolving tech landscape, any opportunity to gain a differentiating edge over competing firms must be pursued without hesitation, and leaner IT teams grant businesses much-needed agility to rise to the occasion when given the chance. As ZDNet reported, Steve Shah of NetScaler explained that companies are realizing they may need "just a few people with a higher level understanding of business needs and the insight to convert ideas into automation scripts."

Cross-platform technologies get a boost

Since cloud architectures have become the go-to environment in which developers create and refine new software projects, the demand for peer code review processes has risen along with a revitalized attitude toward experimentation. According to eWeek, IT leader Microsoft is at the cutting edge of serving these exact needs, offering a DevOps experience as part of its Azure cloud services to be used in conjunction with its development platform, Visual Studio. With an increasing corporate focus on mobile device management and heightened security measures, coding consoles protected by private, off-premises cloud networks can be just the ticket for many companies to tap into the innovative potential of their development teams and make strategies such as bring-your-own-device a profitable reality.

"Developing for a mobile-first, cloud-first world is complicated, and Microsoft is working to simplify this world without sacrificing speed, choice, cost or quality," said Scott Guthrie, executive vice president at Microsoft, eWeek reported. "Imagine a world where infrastructure and platform services blend together in one seamless experience, so developers and IT professionals no longer have to work in disparate environments in the cloud. Microsoft has been rapidly innovating to solve this problem, and we have taken a big step toward that vision today."

With the introduction of Visual Studio Online and other cloud-based tools designed to workshop new ideas in a secure setting, developers can reach new levels of creative inspiration that could lead to the next big breakthrough for their companies, and the IT world at large. 

Categories: Companies

Configuring Superscribe to a self-hosted OWIN application

Decaying Code - Maxime Rouiller - Fri, 07/04/2014 - 22:28

We’ll start from my previous post with a single console application with a self-hosted OWIN instance.

The goal here is to provide a routing system so that we can route our application into different sections. I could use something like Web API, but there the routing and the application itself are tightly linked.

I'm going to use a nice tool called Superscribe that allows us to do routing. It's graph-based routing, but it should be simple enough for us to hook up and create routes.
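To build some intuition for what "graph-based routing" means before we touch Superscribe itself, here's a toy illustration. This is purely my own sketch, not Superscribe's actual API: each URL segment is an edge between nodes in a graph, and dispatching a request means walking the graph segment by segment until a handler is reached.

```csharp
using System;
using System.Collections.Generic;

// One node in the route graph; edges are keyed by URL segment.
public class RouteNode
{
    public Dictionary<string, RouteNode> Children { get; } =
        new Dictionary<string, RouteNode>(StringComparer.OrdinalIgnoreCase);
    public Func<string> Handler { get; set; }
}

public class ToyRouter
{
    private readonly RouteNode root = new RouteNode();

    // Walk (and build) the graph one segment at a time, then attach the handler.
    public void Map(string path, Func<string> handler)
    {
        var node = root;
        foreach (var segment in path.Split(new[] { '/' }, StringSplitOptions.RemoveEmptyEntries))
        {
            if (!node.Children.TryGetValue(segment, out var next))
                node.Children[segment] = next = new RouteNode();
            node = next;
        }
        node.Handler = handler;
    }

    // Walk the graph for an incoming path; no matching node or handler means 404.
    public string Dispatch(string path)
    {
        var node = root;
        foreach (var segment in path.Split(new[] { '/' }, StringSplitOptions.RemoveEmptyEntries))
        {
            if (!node.Children.TryGetValue(segment, out node))
                return "404";
        }
        return node.Handler != null ? node.Handler() : "404";
    }
}

public class Program
{
    public static void Main()
    {
        var router = new ToyRouter();
        router.Map("/", () => "Hello world");
        router.Map("/Home", () => "This is the home page");

        Console.WriteLine(router.Dispatch("/"));     // Hello world
        Console.WriteLine(router.Dispatch("/Home")); // This is the home page
        Console.WriteLine(router.Dispatch("/nope")); // 404
    }
}
```

Superscribe generalizes this idea with composable graph nodes and per-node actions, but walking a graph of segments is the core concept.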

Installing Superscribe

Well, we’ll open up the Package Manager Console again and run the following command:

Install-Package Superscribe.Owin

This should install all the proper dependencies to have our routing going.

Modifying our Startup.cs to include Superscribe

First things first, let's get rid of the silly WelcomePage we created in the previous post. Boom. Gone.

Let’s create some basic structure to handle our routes.

using Microsoft.Owin;
using Owin;
using Superscribe.Owin.Engine;
using Superscribe.Owin.Extensions;

[assembly: OwinStartup(typeof(MySelfHostedApplication.Startup))]

namespace MySelfHostedApplication
{
    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            var routes = CreateRoutes();

            // Hand incoming requests off to the Superscribe route engine.
            app.UseSuperscribeRouter(routes);
        }

        public IOwinRouteEngine CreateRoutes()
        {
            var routeEngine = OwinRouteEngineFactory.Create();
            return routeEngine;
        }
    }
}

This code basically creates all the necessary plumbing for Superscribe to handle our requests.

Creating our routes

We now have a route engine to work with, so let's first create a handler for the default “/” URL.

We'll also create a route for “/welcome” to use the default WelcomePage that we had earlier (just for demo purposes).

We’ll also create a route for “/Home” that will return a plain text (for the moment).

Here’s what it looks like:

using Microsoft.Owin;
using Owin;
using Superscribe.Models;
using Superscribe.Owin.Engine;
using Superscribe.Owin.Extensions;

[assembly: OwinStartup(typeof(MySelfHostedApplication.Startup))]

namespace MySelfHostedApplication
{
    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            var routes = CreateRoutes();

            // Hand incoming requests off to the Superscribe route engine.
            app.UseSuperscribeRouter(routes);
        }

        public IOwinRouteEngine CreateRoutes()
        {
            var routeEngine = OwinRouteEngineFactory.Create();

            // “/” answers GET with "Hello world".
            routeEngine.Base.FinalFunctions.Add(new FinalFunction("GET", o => "Hello world"));

            // “/Home” runs an inline OWIN middleware that returns plain text.
            routeEngine.Pipeline("Home").Use((context, func) =>
            {
                context.Response.ContentType = "text/plain";
                return context.Response.WriteAsync("This is the home page");
            });

            return routeEngine;
        }
    }
}

That was simple.

Basically, we built a simple pipeline that hands the request off to an OWIN middleware.

Up next: what about NancyFX with Superscribe?

Categories: Blogs

Insurance & Technology: How Insurers Can Avoid “Black Swans” in Product Launches

Insurance & Technology, a US publication that “provides insurance business and technology executives with the targeted information and analysis they need to be more profitable, productive and competitive,” recently published a contributed article by Original Software CEO Colin Armitage. “We’ve all seen it happen: An IT project plagued with delays, changes and complications goes so […]
Categories: Companies

Predictive analytics poised to improve government operations

Kloctalk - Klocwork - Fri, 07/04/2014 - 15:00

Predictive analytics is, without a doubt, one of the most promising technologies to emerge in recent years. By leveraging these solutions, organizations in a wide range of fields can make better informed, more strategic decisions in just about every area of business.

This applies not only to the private sector, but the public as well. As FCW contributor Thom Rubel recently reported, predictive analytics is poised to deliver major improvements to government agencies. However, for this to occur, a concerted effort to embrace the technology is essential.

Predicting government
Rubel noted that there are a number of areas where governmental use of available data combined with predictive analytics could yield powerful results.

"For example, programs that are collectively designed to ensure the smooth flow of people and commerce are typically informed by multiple data sources generated by people or things (sensors, data networks, etc.)," Rubel wrote. "Predictive decision-making ensures that the right combinations of information come together based on business rules that optimize desired outcomes – think smooth traffic flows."

By embracing predictive analytics technology, the government could see its operational efficiency rise significantly.

Predictive challenges
However, as Rubel pointed out, there are serious challenges which must be overcome first. Put simply, the government needs to make progress in terms of making sense of its massive volume of available data and also ensure that the technologies used for predictive analytics can scale up and down as needed.

Part of the reason this is such a challenge is that, as a general rule, the government struggles to attract and retain the level of IT talent necessary to develop and implement such advanced technological solutions. Numerous reports have noted that up-and-coming IT experts typically veer toward the private sector because the incentives to join the government are just not competitive. Government agencies do not afford these personnel the level of freedom they require to innovate new solutions, including advanced analytics efforts. This makes it difficult for the government to take advantage of this and other technological progress.

Yet despite this and other obstacles, Rubel believes that the government will eventually utilize predictive analytics to a wide degree. Specifically, he forecast the Internet of Things will integrate with predictive analytics and government programs to deliver more sophisticated, effective governance. As the use of analytics for critical decision making grows, it becomes more important for organizations to rely on proven and robust algorithms to deliver results that can be trusted.

Learn more:
• Read this white paper to learn how analytics are used by different industries to create competitive advantages (PDF)
• Answer the question of which is costlier – building your own algorithms or buying them?

Categories: Companies

A Hackathon for Testers – A Testathon

The Social Tester - Fri, 07/04/2014 - 14:29
I’m a big fan of Hackathons and ShipIt days as a mechanism for learning, but also as an opportunity to bring people together to share ideas. That’s why I’m liking the look of this – a Testathon. The next ones at Spotify’s head office in Sweden. More details on their website.
Categories: Blogs

Anti-Pattern: Fixing Configuration “As-Broken”

IBM UrbanCode - Release And Deploy - Thu, 07/03/2014 - 23:30

In the webinar Death to Manual Deployments we highlight a common problem in enterprise IT: configuration updates to middleware and applications are made on an “as-broken” basis. A developer changes the application so that it needs a configuration tweak, which she makes on her own laptop. When the code is submitted a few days later, the first test environment starts showing errors. After a defect is raised, the developer tells QA which change to make, and it's made. A week later, the application is promoted to another testing environment, where it fails; defects are raised, and eventually someone remembers to make the fix. Hopefully someone adds this to the release plan before the production release, but outage windows extended by this pattern are not unheard of.

The basic strategy for fixing this challenge is to drive these changes into the release process as quickly as possible. Ideally, the only way to make these kinds of configuration changes in any of your testing environments is through your deployment automation tool, such as UrbanCode Deploy. This forces the change to be captured and makes it easy to bind the configuration change to the application change that requires it. Otherwise, the policy should be that no configuration change is permitted unless it's shown to be captured in the release plan. UrbanCode Release has a nice way of capturing these kinds of manual changes. Developers and testers are generally given access to the lower-environment deployment plan for the release they are working on. Either the original programmer or the first tester to find the problem updates the plan with instructions for making the configuration change. The new task is flagged to run only in environments that haven't had the change applied yet, and it is suggested as a task to add to the production release plan. Easy to capture and easy to manage.


Categories: Companies

What's new in ApprovalTests.Net v3.7

Approval Tests - Thu, 07/03/2014 - 22:56
[Available on Nuget]
AsyncApprovals - rules and exceptions [Contributors: James Counts]
In the end, all tests become synchronous, which shapes what we recommend for a normal test. However, if you are looking to test exceptions, everything changes and you might want a different approach.

Removed BCL requirement [Contributors: James Counts & Simon Cropp]
HttpClient is a nice way of doing web calls in .NET. Unfortunately, at this time the BCL package on NuGet does unfortunate things to your project if you do not wish to use HttpClient. This is a violation of a core philosophy of ApprovalTests: "only pay for the dependencies you use."
The HttpClient dependency was added in ApprovalTests 3.6; thanks to Simon for pointing out and troubleshooting this error. It has now been removed.

Wpf Binding Asserts [Contributors: Jay Bazuzi]
This is a bonus from v3.6. Detecting and reporting WPF binding errors is very hard: just to get the reports to happen, you have to fiddle with the registry and then read and parse logs. No more! Now you use BindsWithoutError to ensure that your WPF bindings are working.

Categories: Open Source

Automation in Testing the Subject of Latest Engaging STP Podcast

uTest - Thu, 07/03/2014 - 20:20

uTest has always had a strong relationship with the Software Test Professionals (STP) community as attendees and sponsors of STP’s twice-a-year STPCon conferences in the US, some of the largest shows in the testing industry.

This week, STP brings us pre-recorded testing fun in the form of a podcast. Testing expert Richard Bradshaw talks with STP on the subject of automation in testing. Specifically, Richard gets into where automation comes into play for a manual tester, how managers can build successful teams of developers and both manual and automated testers, and how to keep everything running smoothly.

Check out the full audio of the great STP interview below.

Categories: Companies

Jenkins User Event & Code Camp 2014, Copenhagen

This is a guest post from Adam Henriques.

On August 22nd Jenkins CI enthusiasts will gather in Copenhagen, Denmark for the 3rd consecutive year for a day of networking and knowledge sharing. Over the past two years the event has grown and this year we are expecting a record number of participants representing Jenkins CI experts, enthusiasts, and users from all over the world.

The Jenkins CI User Event Copenhagen has become a focal point for the Scandinavian Jenkins community to come together, share new ideas, network, and draw inspiration from peers. The program offers invited as well as contributed talks, tech talks, case stories, and facilitated Open Space discussions on best practices and the application of continuous integration and agile development with Jenkins.

The Jenkins CI Code Camp 2014

The Jenkins CI User Event will be kicked off by the Jenkins CI Code Camp on August 21st, the day before the User Event. Featuring Jenkins frontrunners, this full-day, community-driven event has become very popular; Jenkins peers band together to contribute content back to the community. The intended audience is both experienced Jenkins developers and developers who are looking to get started with Jenkins plugin development.

For more information please visit the Jenkins CI User Event 2014, Copenhagen website.

Categories: Open Source

Applause Collaborates on Mobile Application Quality Solutions With IBM

SQA Zone - Thu, 07/03/2014 - 18:25
Applause has announced an ongoing technology collaboration with global solutions leader, IBM. Through this teaming, the two firms have co-developed mobile app quality solutions that enable companies to improve app quality and delight their mobile ...
Categories: Communities

JUC Berlin summary


After a very successful JUC Boston we headed over to Berlin for JUC Berlin. I've heard the attendance number was comparable to that of JUC Boston, with close to 400 people registered and 350+ people who came.

The event kicked off at a pre-conference beer garden meetup, except it turned out that the venue was closed on that day and we had to make an emergency switch to another nearby place, and missed some people during that fiasco. My apologies for that.

But the level of the talks during the day more than made up for my failing. They covered everything from large user case studies from BMW to Android builds, from continuous delivery to Docker, and of course Workflow!

One of the key attractions of events like this is actually meeting the people you interact with. All the usual suspects of the community were there, including some whom I met for the first time.

If you missed the event, most of the slides are up, and I believe the video recordings will be uploaded shortly.

Categories: Open Source

Applause and IBM Collaborate on Mobile Software Testing

Software Testing Magazine - Thu, 07/03/2014 - 18:12
Applause has announced an ongoing technology collaboration with IBM. Through this teaming, the two firms have co-developed mobile app quality solutions that enable companies to improve app quality and delight their mobile users. These offerings take the form of both on-premise and cloud-based solutions. Applause worked closely with IBM’s MobileFirst and Rational Software technology teams to develop solutions that help companies better achieve mobile app quality that aligns with users’ perspectives. Applause and IBM will also work together on thought leadership activities to advance the market’s knowledge around the value of ...
Categories: Communities

Pictures from JUC and cdSummit

I've uploaded pictures I've taken during JUC Boston and JUC Berlin.

The JUC Berlin pictures start with the pre-conference beer garden meet-up. You can see Vincent Latombe giving a talk about the Literate plugin; I really appreciated his coming despite the event being only a few days before his wedding:

In the JUC Boston pictures, you can see some nice Jenkins lighting effects, as well as my colleague Corey Phelan using the World Cup to lure attendees into a booth:


Pictures from the cdSummits are also available here and here.

If you have taken pictures, please share them in a comment here so that others can see them.

Categories: Open Source

Jenkins Office Hours: dotCi

Surya walked us through the dotCi source code yesterday, and we discussed a bunch of ideas about how to reuse its pieces. The recording is on YouTube, and my notes are here.

Categories: Open Source
