
Feed aggregator

How to find if an object is displayed in a web page with CodedUI (C#)

The code below will help determine whether an object is displayed or not…

BrowserWindow browser = BrowserWindow.Launch(new System.Uri("http://www.google.com"));
UITestControl nextButton = new UITestControl(browser);
nextButton.SearchProperties.Add("Id", "Object_ID");
if (nextButton.Exists)
{
    Console.WriteLine("Next button is available");
}
else
{
    Console.WriteLine("Next button is not available");
}
Categories: Blogs

How to read HTML table cell data with CodedUI (C#)

Testing tools Blog - Mayank Srivastava - Sat, 09/13/2014 - 19:03
The code below will help get data from a table cell…

BrowserWindow browser = BrowserWindow.Launch(new System.Uri("http://YourWebApplicationURL.com"));
HtmlTable table = new HtmlTable(browser);
// The line below will identify the table object.
table.SearchProperties.Add("Id", "Table_ID");
for (int i = 1; i <= 1; i++)
{
    for (int j = 1; j <= 8; j++)
    {
        HtmlCell cell = new HtmlCell(table);
        […]
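The excerpt is cut off above, so here is a minimal sketch of how the cell lookup might be completed, assuming the standard CodedUI HtmlControls API (the HtmlCell located via its RowIndex/ColumnIndex search properties and read through InnerText) and that the statements run inside a coded UI test method; the loop bounds simply mirror the excerpt rather than the rest of the original post:

BrowserWindow browser = BrowserWindow.Launch(new System.Uri("http://YourWebApplicationURL.com"));
HtmlTable table = new HtmlTable(browser);
table.SearchProperties.Add("Id", "Table_ID");

for (int row = 1; row <= 1; row++)
{
    for (int col = 1; col <= 8; col++)
    {
        // Locate the cell at the given row and column inside the table.
        HtmlCell cell = new HtmlCell(table);
        cell.SearchProperties.Add(HtmlCell.PropertyNames.RowIndex, row.ToString());
        cell.SearchProperties.Add(HtmlCell.PropertyNames.ColumnIndex, col.ToString());

        // Print the visible text of the cell.
        Console.WriteLine(cell.InnerText);
    }
}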
Categories: Blogs

Speaking “Innovation” for performance engineering: HP at Velocity Conference

HP LoadRunner and Performance Center Blog - Fri, 09/12/2014 - 20:14

There is still time! You don’t want to miss it! You can still attend the Velocity Conference in New York City. This event is unlike any other application performance conference you will attend.

Keep reading to find out how you can connect with us and what exciting sessions we are bringing to the event.

Categories: Companies

Load Testing Not Performed in Most Organizations: Should it be an Optional Affair?

uTest - Fri, 09/12/2014 - 17:57

We’ve all seen the disastrous results of not properly load testing and of sites not being able to shoulder the traffic — the healthcare.gov site crashing in the United States is one example where people’s livelihoods were actually put at risk (this wasn’t just someone being inconvenienced today while pre-ordering the iPhone 6).

So you’d think that more organizations would take load testing seriously as part of the software development process, given the bottom-line risks to the business. However, according to a Software Testing Magazine report citing a survey from the Methods & Tools software development magazine, only 24% of organizations load test all of their projects, and as many as 34% don’t perform any load or performance testing at all.

I’d be interested to dig deeper into this report, because it isn’t clear whether this is a widespread issue in software development or one confined to certain sectors. For example, organizations in this survey’s respondent pool may want to re-think their load testing strategies if they’re in industries with a low tolerance for crashes or slow site performance — e.g. retail. Nonetheless, it is still a surprising number.

Is load testing just an optional step for software development organizations? Or have they still not learned from the recent spate of high-profile site crashes? We’d be interested to hear from you in the comments below.

Categories: Companies

The Software Tester's Greatest Asset

I interact with thousands of testers each year. In some cases, it's in a classroom setting, in others, it may be over a cup of coffee. Sometimes, people dialog with me through this blog, my website or my Facebook page.

The thing I sense most from testers that are "stuck" in their career or just in their ability to solve problems is that they have closed minds to other ways of doing things. Perhaps they have bought into a certain philosophy of testing, or learned testing from someone who really wasn't that good at testing.

In my observation, the current testing field is fragmented into a variety of camps, such as those that like structure, or those that reject any form of structure. There are those that insist their way is the only way to perform testing. That's unfortunate - not the debate, but the ideology.

The reality is there are many ways to perform testing. It's also easy to use the wrong approach on a particular project or task. It's the old Maslow "law of the instrument" that says, "I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail."

Let me digress for a moment...

I enjoy working on cars, even though it can be a very time-consuming, dirty and frustrating experience. I've been working on my own cars for over 40 years now. I've learned little tricks along the way to remove rusted and frozen bolts. I have a lot of tools - wrenches, sockets, hammers...you name it. The most helpful tool I own is a 2-foot piece of pipe. No, I don't hit the car with it! I use it for leverage. (I can also use it for self defense, but that's another story.) It cost me five dollars, but has saved me many hours of time. Yeah, a lowly piece of pipe slipped over the end of a wrench can do wonders.

The funny thing is that I worked on cars for many years without knowing that old mechanic's trick. It makes me wonder how many other things I don't know.

Here's the challenge...

Are you open to other ways of doing things, even if you personally don't like them?

For example, if you needed to follow a testing standard, would that make you storm out of the room in a huff?

Or, if you had to do exploratory testing, would that cause you to break out in hives?

Or, if your employer mandated that the entire test team (including you) get a certification, would you quit?

I'm not suggesting you abandon your principles or beliefs about testing. I am suggesting that among the things we reject out of hand could be just the solution you are looking for.

The best thing a tester does is to look at things objectively, with an open mind. When we jump to conclusions too soon, we may very well find ourselves in a position where we have lost our objectivity.

As a tester, your greatest asset is an open mind. Look at the problem from various angles. Consider the pros and cons of things, realizing that even your list of pros and cons can be skewed. Then, you can work in many contexts and also enjoy the journey.


Categories: Blogs

It’s Time for Full Open Source Disclosure…

Sonatype Blog - Fri, 09/12/2014 - 15:31
We are not the first industry to face this challenge. But many are convinced our problem is much smaller than it really is or that it does not exist. They simply ignore it. Or choose to do nothing about it. Meanwhile, the problem is multiplying like rabbits. The challenge lies within our...

To read more, visit our blog at blog.sonatype.com.
Categories: Companies

HPC essential for blockbuster films

Kloctalk - Klocwork - Fri, 09/12/2014 - 15:00

For a while, high performance computing tools were used almost exclusively for advanced research at universities, government facilities and the like. Now, though, the technology has become much more affordable and widely available, and countless organizations in other fields regularly leverage these tools.

The film industry is a case in point. As Wired's David Beer recently highlighted, today's summer blockbusters increasingly rely on HPC solutions to continue to deliver realistic, impressive CGI and other visuals.

HPC animation
Beer pointed to Pixar's "Brave" as a key example of the role that HPC tools now play for filmmakers. The film features a protagonist, Merida, whose most identifiable trait is a tremendous amount of bright, curly red hair. According to Beer, the animators were very focused on ensuring that Merida's hair appeared as realistic as possible, and built a hair simulation engine specifically for this purpose. The author argued that her hair, along with all of the grass and other characters' hair, demonstrates much more life-like motion than in other animated films. This upgrade is attributable to the use of HPC tools in the Pixar animation studios.

Beer noted that in the past, animators hoping to demonstrate the effects of the wind on hair or other objects needed to specifically craft those items in motion. With HPC tools, though, animators instead create algorithms that dictate how the wind will affect everything in a given frame. This requires a tremendous amount of computing power – for example, the film "Monsters University" reportedly required 100 million hours of CPU rendering. Last year, Pixar had 2,000 servers with more than 24,000 cores, leading one employee to estimate that the studio possessed one of the 25 most powerful supercomputers in the world.

Live action issues
The impact of HPC in the film industry extends well beyond Pixar and other animation-focused studios. Beer pointed out that virtually every major blockbuster in recent years has incorporated HPC tools in some capacity. For example, a single HPC production company provided effects for "Avatar," "The Avengers" and the "Lord of the Rings" franchise – all of which are among the highest-grossing films of all time.

As movies' special effects become more and more impressive, the bar keeps rising. According to the writer, this year's "Dawn of the Planet of the Apes" demanded 10 times as much computing power as was needed for 2003's "The Lord of the Rings: The Return of the King."

HPC challenges
This trend poses a major problem for many film studios and production companies. Put simply, it is difficult for these organizations to obtain the HPC power they need at an affordable price. In most cases, these studios need to use advanced HPC and its attendant servers for only a relatively small period of time – for most of the year, the solutions remain dormant. However, as Beer pointed out, cloud services are not typically an option, as most vendors cannot spare 10,000 servers for a few weeks or months.

Another issue that must be addressed is debugging. An advanced visual debugger is essential for any organization leveraging HPC platforms, enabling developers to identify and correct any problems in the code that might otherwise go unnoticed. HPC-specific debuggers can reduce testing time and provide an overall boost to efficiency and productivity. For a big-budget film with a very tight release schedule crafted months in advance, such optimization efforts can play a key role in ensuring that post-production is a smooth process.

With the right tools in place, film studios can continue the progress they've already made and deliver increasingly impressive visual effects. 

Categories: Companies

Code Coverage: Myth vs Reality

NCover - Code Coverage for .NET Developers - Fri, 09/12/2014 - 13:10

Through our work with developers and development teams over the years, we have found that there are certain myths about code coverage that still exist today. In an effort to ensure that your development and QA teams are all working towards the same goal, we wanted to both share and dispel a few of them. Here are some of the most common ones:

Myth #1 – There is a perfect code coverage score

We often get asked “what is the right code coverage number for our team?” That question implies that there is a “perfect” code coverage number, which is our first myth. Although it would be nice, coverage scores can range across the spectrum and are determined by your specific testing strategy, the critical path of your code and the overall complexity of your code. In fact, the way you structure your modules and write your actual code impacts your coverage results. Although there isn’t a single perfect score, we invite you to watch our best practices webinar for suggestions on how to achieve coverage results that work best for your organization.

Myth #2 – All code coverage metrics are the same

Earlier this week we had a prospect share with us, “we already have really high code coverage so we know our code is in good shape.” Although it’s great that they are measuring coverage, this leads to our second myth, which is that all code coverage metrics are created equal. In this particular instance, the prospect was measuring line coverage. Line coverage and other metrics like method coverage fall into the category of metrics you can track but that may not be particularly useful. In .NET, there may be several sequence points in a single line, each with its own implications for the code. Does exercising any part of the line mean that you have full coverage? In a word, “no.” Not all code coverage metrics are created equal. We encourage you to understand your metrics and make decisions based on information-rich metrics when it comes to the health of your code. You can find a quick refresher on code coverage metrics over here.
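To make that distinction concrete, here is a small illustrative C# snippet of our own (not from the NCover post): the method body sits on a single physical line, so a test that only ever passes a positive value reports 100% line coverage, while branch or sequence-point coverage correctly flags the untested outcome.

// One physical line, but several sequence points: the condition and each branch.
public static string Describe(int value) { return value > 0 ? "positive" : "non-positive"; }

// A test suite that only calls Describe(5) touches this line, so line coverage
// reports 100%, yet the "non-positive" branch is never executed.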

Myth #3 – 100% code coverage means 100% bug-free code

Don’t get us wrong, if you have 100% branch coverage, you can feel pretty good that you are on the right path. In fact, we salute you! However, our final myth is exactly that, a myth. Remember, tests measure the quality of your code and code coverage measures the quality of your tests… as designed. Shakespeare referred to beauty as being in the eye of the beholder. As developers, we may feel that bugs are in the eye of the beholder. Is the software performing as designed but not as the customer expects or wants? Are there new environmental or usage scenarios that didn’t exist when you published? Is the latest version of a web browser causing your web app to fail? These are questions that every development team faces sooner or later, and although code coverage alone can’t solve these problems, it can help you ensure that as you engineer new solutions, those solutions perform as expected.

Understanding metrics like branch coverage, sequence-point coverage and change risk anti-pattern score, and how they fit into your testing strategy, is important in busting these myths and increasing both your confidence as developers and the quality of your code.

The post Code Coverage: Myth vs Reality appeared first on NCover.

Categories: Companies

How to get the current page URL with CodedUI (C#)

Testing tools Blog - Mayank Srivastava - Fri, 09/12/2014 - 10:02
The code below will help get the current page URL using CodedUI (C#).

BrowserWindow browser = BrowserWindow.Launch(new System.Uri("http://www.google.com"));
Microsoft.VisualStudio.TestTools.UITesting.HtmlControls.HtmlDocument pageObject = new Microsoft.VisualStudio.TestTools.UITesting.HtmlControls.HtmlDocument(browser);
String URL = pageObject.PageUrl.ToString();
System.Console.WriteLine(URL);
Categories: Blogs

Zone of control vs Sphere of influence

Gojko Adzic - Fri, 09/12/2014 - 09:22

In The Logical Thinking Process, H. William Dettmer talks about three different areas of systems:

  • The Zone of control (or span of control) includes all those things in a system that we can change on our own.
  • The Sphere of influence includes activities that we can impact to some degree, but can’t exercise full control over.
  • The External environment includes the elements over which we have no influence.

These three system areas, and the boundaries between them, provide a very useful perspective on what a delivery team can hope to achieve with user stories. Evaluating which system area a user story falls into is an excellent way to quickly spot ideas that require significant refinement.

This is an excerpt from my upcoming book 50 Quick Ideas to Improve your User Stories. Grab the book draft from LeanPub and you’ll get all future updates automatically.

A good guideline is that the user need of a story (‘In order to…’) should ideally be in the sphere of influence of the delivery team, and the deliverable (‘I want…’) should ideally be in their zone of control. This is not a 100% rule and there are valid exceptions, but if a story does not fit into this pattern it should be investigated – often it won’t describe a real user need and rephrasing can help us identify root causes of problems and work on them, instead of just dealing with the symptoms.

Image: Zone of control diagram

When the user need of a story is in the zone of control of the delivery group, the story is effectively a task without risk, which should raise alarm bells. There are three common scenarios: the story might be fake, a micro-story, or misleading.

Micro-stories are what you get when a large business story is broken down into very small pieces, so that some small parts no longer carry any risk – they are effectively stepping stones to something larger. Such stories are OK, but it’s important to track the whole hierarchy and measure the success of the micro-stories based on the success of the larger piece. If the combination of all those smaller pieces still fails to achieve the business objective, it might be worth taking the whole hierarchy out or revisiting the larger piece. Good strategies for tracking higher level objectives are user story mapping and impact mapping.

Fake stories are those about the needs of delivery team members. For example, ‘As a QA, in order to test faster, I want the database server restarts to be automated’. This isn’t really about delivering value to users, but a task that someone on the team needs, and such stories are often put into product backlogs because of misguided product owners who want to micromanage. For ideas on how to deal with these stories, see the chapter Don’t push everything into stories in the 50 Quick Ideas book.

Misleading stories describe a solution and not the real user need. One case we came across recently was ‘As a back-office operator, in order to run reports faster, I want the customer reporting database queries to be optimised’. At first glance, this seemed like a nice user story – it even included a potentially measurable change in someone’s behaviour. However, the speed of report execution is pretty much in the zone of control of the delivery team, which prompted us to investigate further.

We discovered that the operator asking for the change was looking for discrepancies in customer information. He ran several different reports just to compare them manually. Because of the volume of data and the systems involved, he had to wait around for 20 to 30 minutes for the reports, and then spend another 10 to 20 minutes loading the different files into Excel and comparing them. We could probably have decreased the time needed for the first part of that job significantly, but the operator would still have had to spend time comparing information. Then we traced the request to something outside our zone of control. Running reports faster helped the operator to compare customer information, which helped him to identify discrepancies (still potentially within our control), and then to resolve them by calling the customers and cleaning up their data. Cleaning up customer data was outside our zone of control; we could only influence it by providing information quickly.

This was a nice place to start discussing the story and its deliverables. We rephrased the story to ‘In order to resolve customer data discrepancies faster…’ and implemented a web page that quickly compared different data sources and almost instantly displayed only the differences. There was no need to run the lengthy reports, as the database software was more than capable of zeroing in on the differences very quickly. The operator could then call the customers and verify the information.

When the deliverable of a story is outside the zone of control of the delivery team, there are two common situations: the expectation is completely unrealistic, or the story is not completely actionable by the delivery group. The first case is easy to deal with – just politely reject it. The second case is more interesting. Such stories might need the involvement of an external specialist, or a different part of the organisation. For example, one of our clients was a team in a large financial organisation where configuration changes to message formats had to be executed by a specialist central team. This, of course, took a lot of time and coordination. By doing the zone of control/sphere of influence triage on stories, we quickly identified those that were at risk of being delayed. The team started on them quickly, so that everything would be ready for the specialists as soon as possible.

How to make it work

The system boundaries vary depending on viewpoint, so consider them from the perspective of the delivery team.

If a story does not fit into the expected pattern, raise the alarm early and consider re-writing it. Throw out or replace fake and misleading stories. Micro-stories aren’t necessarily bad, but going into such depth of detail is probably overkill for anything apart from short-term plans. If you discover micro-stories in mid-term or long-term plans, it’s probably better to replace a whole group of related stories with one larger item.

If you discover stories that are only partially actionable by your team, consider splitting them into a part that is actionable by the delivery group, and a part that needs management intervention or coordination.

To take this approach even further, consider drawing up a Current reality tree (outside the scope of this post, but well explained in The Logical Thinking Process), which will help you further to identify the root causes of undesirable effects.

Categories: Blogs

But What do I Know?

Hiccupps - James Thomas - Fri, 09/12/2014 - 06:59


The novelty of hypertext over traditional text is the direct linking of references. This allows the reader to navigate immediately from one text to another, or to another part of the same text, or expose more detail of some aspect of that text in place. This kind of hyperlinking is now ubiquitous through the World Wide Web and most of us don't give it a second thought.

I was looking up hypermedia for the blog post I wanted to write today when I discovered that there's another meaning of the term hypertext in the study of semiotics and, further, that the term has a counterpart, hypotext. These two are defined in relation to one another, credited to Gérard Genette: "Hypertextuality refers to any relationship uniting a text B (which I shall call the hypertext) to an earlier text A (I shall, of course, call it the hypotext), upon which it is grafted in a manner that is not that of commentary."

In a somewhat meta diversion, following a path through the pages describing these terms realised a notion that I'd had floating around partially-formed for a while: quite apart from the convenience, an aspect of hypertext that I find particularly valuable is the potential for maintaining and developing the momentum of a thought by chasing it through a chain of references. I frequently find that this process, and the speed of it, is itself a spur to further ideas and new connections. For example, when I'm stuck on a problem and searching hasn't got me to the answer, I will sometimes resort to following links through sets of web pages in the area, guided by the sense that they might be applicable, by them appearing to be about stuff I am not familiar with, by my own interest, by my gut.

I don't imagine that I would have thought that just now had I not followed hypertext to its alternative definition and then to hypotext and then made the connection from the links between pages to the chain of thoughts which parallels, or perhaps entwines, or maybe leaps off from them.

And that itself is pleasing because the thing I wanted to capture today grew from the act of clicking through links (I so wish that could be a single verb and at least one other person thinks so too: clinking anyone?). I started at Adam Knight's The Facebook Effect, clinked through to a Twitter thread  from which Adam obtained the image he used and then on to Overcoming Impostor Syndrome which contained the original image.

The image that unites these three is the one I'm using at the top here and what it solidified for me was the way that we can be inhibited from sharing information because we feel that everyone around us will already know it or will have remembered it because we know we told them it once before. I've seen it, done it and still do it myself in loads of contexts including circulating interesting links to the team, running our standups and reporting the results of investigations to colleagues.

As testers it can be particularly dangerous, not necessarily because of impostor or Facebook effects, but because we need to be aware that when we choose not to share, or acknowledge, or reacknowledge some significant issue with the thing we're testing we may be inadvertently hiding it (although context should guide the extent to which we need to temper the temptation to over-report and be accepting of others reminding us of existing information). It's one of the reasons I favour open notebook testing.

Note to self: I don't know what you know, you know?
Image: Overcoming Impostor Syndrome
Categories: Blogs

Free meet-up: Talking application performance engineering in New York at Velocity

HP LoadRunner and Performance Center Blog - Thu, 09/11/2014 - 20:52

Are you attending Velocity NYC next week? If you are (or are in New York during that time) I encourage you to take the opportunity to meet with us during the event.

We are hosting a performance engineering meet-up on Monday. Keep reading to find out how you can join us!

Categories: Companies

There’s an App U for That: uTest in New England Journal of Higher Education

uTest - Thu, 09/11/2014 - 20:37

If you haven’t noticed, apps are kind of a big deal right now. How big? To the tune of about 466,000 jobs created by the apps economy from 2007 to 2012, according to a TechNet survey.

It is also anticipated that employer demand will create 3.7 million new IT jobs by 2016. So it’s only natural, going hand-in-hand with this explosive job growth, that there is a need for workers with skill sets that will allow them to run the tech necessary to power this new app economy.

According to Applause/uTest Chief Marketing and Strategy Officer Matt Johnston, who sat down for an interview with the New England Journal of Higher Education, it’s also an “alternative” path that testers are taking to learn these in-demand skill sets:

“With a recent surge in employment thanks to the proliferation of IT jobs, many adults who are seeking to turn their careers around and want to participate in the apps economy are turning to alternative education paths—because going back to college will take too long for them to obtain a degree.”

uTest has been proud to be a part of this alternative path with the launch of uTest University almost a year ago, designed to be a single source for testers of all experience levels to access free training courses. You can check out the full article right here with Matt’s interview, which gets into how testers and other IT workers are taking education into their own hands in this new economy, and how programs like uTest University and other massive open online courses (MOOCs) are leading the charge.

And you can also start your education right away — no expensive textbooks needed — over at uTest University, totally free to members of the uTest Community.

Categories: Companies

CAST Provides New Software Quality Certification Program

Software Testing Magazine - Thu, 09/11/2014 - 18:43
CAST has launched a Software Certification Program to provide organizations with standards-based verification of the quality of their critical systems. The new certification is a significant evolution in IT maturity, moving beyond the current industry model that certifies only software development processes, not the output. CAST’s new certification program is the first of its kind to certify the output of software development, enabling companies to understand how their applications stand up against the most stringent software quality standards for reliability, performance, security and maintainability. The CAST Software Certification Program is based ...
Categories: Communities

5 tips to help you correlate Web HTTP scripts in HP LoadRunner

HP LoadRunner and Performance Center Blog - Thu, 09/11/2014 - 18:22

When you record Web/HTTP scripts using HP LoadRunner’s Vugen, you might find that the script won’t play back without modification. This is often because the recording captures values that apply only to the session in which they were recorded. When Vugen plays back those recorded values, they are no longer valid for the new session. To make the script play back correctly, these values must be replaced with values from the currently executing session. The process of identifying and replacing these values is known as correlation.
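In a VuGen script this capture-and-replace is what functions such as web_reg_save_param do, but purely to illustrate the underlying idea of grabbing a session-specific value between known left/right boundaries in one response and substituting it into later requests, here is a small, self-contained C# sketch of our own (not LoadRunner code; the boundary strings and token name are made up for the example):

using System;

class CorrelationSketch
{
    // Extract the text between a left and a right boundary, the way a
    // correlation rule captures a dynamic value from a server response.
    static string Capture(string response, string leftBoundary, string rightBoundary)
    {
        int start = response.IndexOf(leftBoundary);
        if (start < 0) return null;
        start += leftBoundary.Length;
        int end = response.IndexOf(rightBoundary, start);
        return end < 0 ? null : response.Substring(start, end - start);
    }

    static void Main()
    {
        // Pretend this is the HTML returned by the first request of the session.
        string firstResponse = "<input name=\"sessionToken\" value=\"abc123\" />";

        // Capture the dynamic value instead of replaying the one that was recorded.
        string token = Capture(firstResponse, "value=\"", "\"");

        // Re-inject the captured value into the next request, replacing the
        // hard-coded value from the recording.
        Console.WriteLine("POST /checkout?sessionToken=" + token);
    }
}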

 

Continue reading to learn some tips that will make it easier for you to correlate values in your Vugen scripts.

 

(This post was written by Yang Luo (Kert) from the LoadRunner R&D Team)

Categories: Companies

Mobile Test Automation Webinar, September 25 2014

Software Testing Magazine - Thu, 09/11/2014 - 17:49
As Mobile Test Automation tools compete with each other on attractive features, it becomes even harder to choose the right one for your enterprise, the one that best fits your requirements and offers the greatest ROI. On Thursday, 25th September, Vishnu Nallani of Gallop will present in a live interactive webinar the formula for Mobile Test Automation that ensures impeccable levels of QA. This webinar will help you identify criteria for selecting the right mobile test automation tools, performing the analysis of ROI for mobile test automation efforts, developing test automation scripts ...
Categories: Communities

Gartner Goes Development-Centric

Sonatype Blog - Thu, 09/11/2014 - 16:38
Recently, Gartner published a new research report that says by 2016, “the vast majority of mainstream IT organizations will leverage nontrivial elements of open source software (directly or indirectly) in mission-critical IT solutions. However, most will fail to effectively manage these assets in...

To read more, visit our blog at blog.sonatype.com.
Categories: Companies

How to Optimize the Good and Exclude the Bad/ Bot Traffic that Impacts your Web Analytics and Performance

This blog is about how a new generation of BOTs impacted our application performance, exploited problems in our deployment and skewed our web analytics. I explain how we dealt with it and what you can learn to protect your own systems. Another positive side-effect of identifying these requests is that we can adjust our web […]

The post How to Optimize the Good and Exclude the Bad/ Bot Traffic that Impacts your Web Analytics and Performance appeared first on Compuware APM Blog.

Categories: Companies

CloudBees Becomes the Enterprise Jenkins Company

Since we founded the company back in 2010, CloudBees has always had the vision of helping enterprises accelerate the way they develop and deploy applications. To that end we delivered a PaaS that covered the entire application lifecycle, from development, continuous integration and deployment to staging and production. As part of this platform, Jenkins has always played a prominent role. Based on popular demand for Jenkins CI, we quickly responded and also provided an on-premise Jenkins distribution, Jenkins Enterprise by CloudBees.
Initially, Jenkins Enterprise by CloudBees customers were mainly using Jenkins on-premise for CI workloads. But in the last two years, a growing number of customers have pursued an extensive Continuous Delivery strategy and Jenkins has moved from a developer-centric tool to a company-wide Continuous Delivery hub, orchestrating many of the key company IT assets.
For CloudBees, this shift has translated into massive growth of our Jenkins Enterprise by CloudBees business and has forced us to reflect on how we see our future. Since a number of CloudBees employees, advisors and investors are ex-JBossians, we’ve had the chance to witness first-hand what a successful open source phenomenon is and how it can translate into a successful business model, while respecting its independence and further fueling its growth. Consequently, it quickly became obvious to us that we had to re-focus the company to become the Enterprise Jenkins Company, both on-premise and in the cloud, and hence exit the runtime PaaS business (RUN@cloud & WEAVE@cloud). While this wasn’t a decision we took lightly (we are still PaaS lovers!), this is the right decision for the company.
With regard to our existing RUN@cloud customers, we’ve already reached out to each of them to make sure they’re being taken care of. We’ve published a detailed migration guide and have set up a migration task force that will help them with any questions related to the migration of their applications. (Read our FAQ for RUN@cloud customers.) We’ve also worked with a number of third-party PaaS providers and will be able to perform introductions as needed. We’ve always claimed that our PaaS, based on open standards and open source (Tomcat, JBoss, MongoDB, MySQL, etc.), would not lock customers in, so we think those migrations should be relatively painless. In any case, we’ll do everything we can to make all customer transitions a success.
From a Jenkins portfolio standpoint, refocusing the company means we will be able to significantly increase our engineering contribution to Jenkins, both in the open source community as well as in our enterprise products. Kohsuke Kawaguchi, founder of Jenkins and CTO at CloudBees, is also making sure that what we do as a company preserves the interest of the community.
Our Jenkins-based portfolio will fit a wide range of deployment scenarios:
  • Running Jenkins Enterprise by CloudBees within enterprises on native hardware or virtualized environments, thanks to our enterprise extensions (such as role-based access control, clustering, vSphere support, etc.)
  • Running Jenkins Enterprise by CloudBees on private and public cloud environments, making it possible for enterprises to leverage the elastic and self-service cloud attributes offered by those cloud layers. On that topic, see the Pivotal partnership we announced today. I also blogged about the new partnership here.
  • Consuming Jenkins as a service, fully managed for you by CloudBees in the public cloud, thanks to our DEV@cloud offering (soon to be renamed “CloudBees Jenkins as a Service”).

Furthermore, thanks to CloudBees Jenkins Operations Center, you’ll be able to run Jenkins Enterprise by CloudBees at scale on any mix of the above scenarios (native hardware, private cloud, public cloud and SaaS), all managed and monitored from a central point.
From a market standpoint, several powerful waves are re-shaping the IT landscape as we know it today: Continuous Delivery, Cloud and DevOps. A number of companies sit at the intersection of those forces: Amazon, Google, Chef, Puppet, Atlassian, Docker, CloudBees, etc. We think those companies are in a strategic position to become tomorrow’s leading IT vendors.
Onward,

Sacha

Additional Resources
Read the press release about our new Jenkins focus
Read our FAQ for RUN@cloud customers
Read Steve Harris's blog

Sacha Labourey is the CEO and founder of CloudBees.
Categories: Companies

CloudBees Partners with Pivotal

Today, Pivotal and CloudBees are announcing a strategic partnership, one that sits at the intersection of two very powerful waves that are re-shaping the IT landscape as we know it today: Cloud and Continuous Delivery.
Pivotal has been executing on an ambitious platform strategy that makes it possible for enterprises to benefit from a wide range of services within their existing datacenter: from Infrastructure as a Service  (IaaS) up to Platform as a Service (PaaS), as well as a very valuable service, Pivotal Network, that makes it trivial to deploy certified third-party solutions on your Pivotal private cloud. (To read Pivotal's view on the partnership, check out the blog authored by Nima Badiey, head of ecosystem partnerships and business development for Cloud Foundry.)
As such, our teams have been working closely on delivering a CloudBees Jenkins Enterprise solution specifically crafted for Pivotal CF. It will feature a unique user experience and will be leveraging Pivotal’s cloud layer to provide self-service and elasticity to CloudBees Jenkins Enterprise users. We expect our common solution to be available on Pivotal CF later this year, and we will be iteratively increasing the feature set.
Given Jenkins’ flexibility, Pivotal customers will be using our combined offering in a variety of ways but two leading scenarios are already emerging.
The first scenario is for Pivotal developers to use Jenkins to perform continuous integration and continuous delivery of applications deployed on top of the Pivotal CF PaaS. CloudBees Jenkins Enterprise provides an integration with the CloudFoundry PaaS API that makes the application deployment process very smooth and straightforward. This first scenario provides first class support for continuous delivery to Pivotal CF developers.
The second scenario focuses on enterprises relying on Jenkins for continuous integration and/or continuous delivery of existing (non-Pivotal CF-based) applications. Thanks to the Pivotal/CloudBees partnership, companies will ultimately be able to leverage the Pivotal cloud to benefit from elastic build capacity as well as the ability to provision more resources on-demand, in a self-service fashion.
The CloudBees team is very proud to partner with Pivotal and bring Pivotal users access to CloudBees Jenkins Enterprise, the leading continuous delivery solution.
Onward,
Sacha

Sacha Labourey is the CEO and founder of CloudBees.
Categories: Companies
