It’s that time of year again, when thousands of people converge on Austin, TX for knowledge sharing, esteemed speakers, barbecue & tacos, and to have an amazing time every night in a traveling karaoke bar. More on that last bit in a minute.
Yes, it’s SXSW time, with interactive week starting in earnest tomorrow. And while this year’s lineup looks amazing, it’s the extra-curricular activities that make the festival the extravaganza that it is, and this year is no different. Well, with perhaps one exception…
This year at SXSW is extra special, as we’ll be unveiling a sneak peek at our new brand, which will be called Applause. Since late last year, when we first announced that we’d be expanding the company vision and changing its name, everyone has been hard at work putting those wheels in motion. Now, at SXSW, you’ll be able to get a temporary, in-progress preview of the new branding and talk to the team about our focus on 360-degree app quality.
So where at SXSW Interactive can you see the new Applause? I’m glad you asked.
First, we’ll be traveling around nightly on the luxurious “transportainment” provided by the folks at the RVIP Lounge. Starting March 7th, hop on and enjoy traveling karaoke with our team – follow us on Twitter at @applause to find our locations on the go every evening.
And that’s not the only reason to follow @applause. During the daylight hours follow to find out where in Austin you can get your special one-time-only Applause gear (created for this year’s festival) from our street team.
The team will be tweeting out their locations, which is important for two reasons:
1. You can find our team and get this one-time-only gear
2. If our team sees you later and you’re wearing that gear you could win one of our great prizes:
- Jambox Mini by Jawbone
- Beats by Dre headphones
- Kindle Fire HD
Lastly, show us your love for the new Applause gear by tweeting a pic of you in it to @applause. One lucky tweeter will win a $500 Amazon gift card!
So if you’re headed to SXSW Interactive, be sure to follow @applause, as it will lead to evenings of wonder, give you a sneak peek at the new branding and possibly bring you great fortune! We’ll see you in Austin!
Learn all about best practices and strategies for testing apps behind a firewall and how to use Sauce Connect during our next online workshop! Join speaker Mike Redman, Director of Sales Engineering at Sauce, on Tuesday, March 11, 2014, at 11:00 AM Pacific Time for the latest.
Whether you’re at the enterprise or startup level, security is a hot topic. That’s why we created Sauce Connect. Sauce Connect creates a secure tunnel between your firewalled app and the Sauce cloud so you can run your tests knowing that your data is encrypted through industry standard TLS.
Keeping our security standards in mind, we completely rewrote the app. With the launch of our latest version, Sauce Connect 4, your tests will now run faster than ever, even under heavy loads. It’s better performing, more reliable, and supports a wider range of web standards, including WebSockets.
Mike will walk you through testing behind a firewall and how to use Sauce Connect 4. A live Q&A session will follow. Register today!
Want information about working at Ranorex or entering into a co-operation agreement with us?
Come to our booth to learn about the company, the software, and the various job profiles and co-operation possibilities at Ranorex!
We look forward to meeting qualified applicants and partners interested in exploring joint opportunities!
Today marks the fifth anniversary of the Software Craftsmanship manifesto. Doug Bradbury asked me the following question:
Do you think that the bar of professionalism has been raised in the 5 years since the Software Craftsmanship Manifesto was published? Why or why not?
My short answer is “yes” – and “no”. Having been around since the early days in November 2008, when I joined the Software Craftsmanship mailing list, and having been involved in the various discussions on the Ethics of Software Craftsmanship, my longer answer hides in this blog entry.

Where we raised the bar
Overall, I think we raised the bar to some extent. Here is a brief list of things I see attached to the Software Craftsmanship movement.
There are lots of conferences where we share our work. I was lucky enough to attend the first Software Craftsmanship conference in 2009 in London. It was a blast. Gladly, we didn’t stop there. Since then other conferences have popped up, like the Software Craftsmanship conference North America and the German Software Craftsmanship and Testing conference (SoCraTes), which has also recently spread to the UK. It’s a very good thing that we keep on conferring, exchanging our thoughts, and sharing what we know with peers in the field. This is clearly a value that we put forth from the manifesto.
There are also a couple of books available on the topic: Uncle Bob Martin’s Clean Code and the lesser-known Clean Coder to start with. Dave Hoover’s and Ade Oshineye’s Apprenticeship Patterns is less well known still, as is Sandro Mancuso’s Software Craftsmanship book. (Guilty: I haven’t read that one yet, either.) Not to forget Emily Bache’s excellent Coding Dojo Handbook. These books collect the Zeitgeist of our movement today. I hope to see more books coming in the next years.
Then there are events like Code Retreats and Coding Dojos. Corey Haines popularized the former; the Dojos were started by several different people. These events help practitioners of the craft of software development learn more about coding practices for the 21st century. For example, I remember a code retreat that a couple of students attended. At the end of the day they thanked us all, since they had learned more in that one day than their professor could have taught them in years of study at university. Then there was the attendee who claimed he would be looking for a new job on Monday – and he did. These events make people of the craft aware of more effective (and efficient) ways to program in this century, and of how to overcome sacred beliefs about coding habits.
On a side note, I think the same goes for the larger testing community out there. As I see it, the context-driven testing community is very close to both the Agile and the Software Craftsmanship communities. I get a lot out of all three communities, and I think we could do better if we managed to join forces.
Last, I think the biggest impact of the Software Craftsmanship movement came out of the very first SoCraTes conference. We got together and thought, “this can’t be it for another whole year; we need to maintain this momentum.” We created the German Softwerkskammer to spread the word about Software Craftsmanship. One year later, we found that we had started ten local user groups, each meeting anywhere from once per quarter to several times per month. They shared their knowledge with one another, and they helped convince the larger world of software developers out there to create well-crafted software by steadily adding value, and by nurturing a community of professionals who understand how to create productive partnerships with their customers.

Where we lowered the bar
There are also things that trouble me, and I think we can do a bit better than that.
Early on, there was the Wandering Book. It floated around between various craftspersons quite a lot; I think I was number 42 on the list when I signed up, and it took a year or so until the book made it into my hands. Unfortunately that book has stopped circulating – as has the revival book that I started about a year ago in Germany. It’s a pity: if we value our treasures, our words of wisdom, so little that we put them aside and forget to share them with future generations of craftspersons, what do we expect our code bases to look like?
Recently, Uncle Bob Martin had an answer to that question – and the community is heavily discussing it. The idea is that of a dedicated foreman who has the right to reject certain changes to the version control system. I think that – like all rules – we shouldn’t apply this one with unthinking faith. Personally, I think we should answer the question “how does this help advance the craft?” before we install such a foreman.
Then there is another thing that worries me. I have worked with a couple of companies in the past few years, and I saw a pattern emerging: a group of software craftspersons who think they are the elite, and so form their own central core team where the best code of all time is produced. This pattern is clearly not in line with the manifesto as I interpret it. As software craftspersons we should be able to share our tales, our stories, and our practices. Creating an elite team that believes it builds the best code in the company erects an artificial barrier that prevents other folks from joining the club, and it shuts down the sharing aspect that we held so dearly in the manifesto. Learning how to get along with your colleagues is the way to go.

How to move on?
As I see it, there are good aspects of Software Craftsmanship around – and there are bad ones. I would be surprised if craftsmanship turned out to be a silver bullet after all. I am glad that we could make more people aware of coding practices they never learned at university, and that we are starting to reach out. From my perspective this is still the tip of the iceberg, and if we don’t learn how to overcome some of our elitist thoughts, our history will probably end after five years. Let’s avoid that, and keep on fighting the crappy code we created.
So since Part 1, here’s what’s new.
We have HTTP API versioning with Nancy and an overview of what is new in WebAPI 2.
As usual, I include a little bit of ElasticSearch.
Enjoy!

ASP.NET & WebAPI
A mistake in the system that disburses payments for a housing benefit program in Amsterdam led to a €188 million payout to around 10,000 households in the city, giving each 100 times their annual sum. A recent investigation tied the cause of the error to an oversight in the software's implementation. While the city has managed to recoup the majority of the money, millions of euros have still not been recovered, and investigating and fixing the issue has already cost around €300,000, the Dutch News reported.
News of the error first broke in December, and the city quickly launched an investigation to find out both what went wrong and how it might go about recovering its money. Research firm KPMG was able to determine that the error was caused by the fact that the software used to handle the payments is based in cents rather than euros, the Dutch News reported. Additionally, no one caught the error as it occurred, leading to the €188 million payout instead of the intended €1.8 million. The highest individual payment was €34,000. Of the total, all but €2.4 million has been recovered, but experts have suggested that at least half of the remaining amount will be extremely difficult or impossible to recover.
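KPMG’s finding, a payment system denominated in cents receiving figures handled as euros, is a classic unit-mismatch bug. Here is a minimal sketch of how the factor of 100 sneaks in; the function name and figures are hypothetical illustrations, not Amsterdam’s actual code.

```ruby
# Hypothetical sketch of a cents-vs-euros unit mismatch. The payment system
# stores amounts in cents; if upstream code treats an already-cent-denominated
# figure as euros and converts again, every payout is inflated 100-fold.

def eur_to_cents(amount_eur)
  (amount_eur * 100).round
end

annual_benefit_cents = 18_000                 # EUR 180.00, already stored in cents
intended = annual_benefit_cents               # correct: pass the cents straight through
actual   = eur_to_cents(annual_benefit_cents) # bug: cents passed where euros were expected

actual / intended  # => 100, i.e. each payment is 100x the intended sum
```

One common defense is to encode the unit in the type rather than the variable name, for example a small Money value object that refuses to mix cents and euros, so that this category of error fails loudly at development time.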
Adding to the problem of the incorrect payments was the fact that there were no warning mechanisms in place. The investigation cited human error as well as a software mishap for letting the problem occur, and right away, city officials decried the lack of automated safeguards.
“How can it be that no alarms went off?” city alderman Pieter Hilhorst said, according to the Amsterdam Herald. “It seems we’re able to pay out €188 million without realizing it.”
The incident underscores the cost both of a simple software error and of not implementing automated common-sense safeguards, and it can serve as a reminder to developers of the value of using tools like static analysis software to catch potential errors during development. With rigorous source code analysis and approaches like peer code review, vendors can design products that are less likely to allow expensive mistakes.
Software news brought to you by Klocwork Inc., dedicated to helping software developers create better code with every keystroke.
Transaction Breakdown and Web Page Breakdown are very common concepts in LoadRunner and load testing in general when working with transport protocols. These metrics provide important information on network timing in order to analyze network layer bottlenecks on the server side when under load.
Keep reading to learn more about TruClient and the Client Side Breakdown metrics, which let you drill down into the Client Time.
What can the financial services industry learn from the U.S. Department of Homeland Security? In this third segment of my blog series on open source component security as it relates to the recently updated Financial Services Information Sharing and Analysis Center (FS-ISAC) guidelines, I explore the need for speed: humans vs. machines.
One mantra of the Department of Homeland Security – “if you see something, say something” – works on a human level to keep us safe. The same mantra has been used across the open source community to keep components secure, by identifying vulnerabilities and sharing that knowledge through public channels like the Common Vulnerabilities and Exposures (CVE) database. But it is now time we recognize that “if you see something, say something” only works for open source at human speed.
Just as the U.S. Department of Homeland Security relies on electronic surveillance to keep citizens safe, the open source software community also needs to embrace this approach. To properly ensure open source is secure, we need to work at machine speed.
We have long since passed a tipping point in open source development and the usage of open source components. Not only do custom applications rely heavily on such components; the open source components themselves do too. Let’s take the Java developer community as an example.
- There are an estimated 10 million Java developers now worldwide.
- Java developers initiated over 13 billion requests of open source components last year from the Central repository.
- The average component depends on 5 other components (each of which has dependencies of its own, and so on).
Looking at the Maven ecosystem and traffic associated with the Central Repository, you can readily see years of exponential growth in open source component downloads. The same patterns appear with RubyGems, NPM and other major open source ecosystems. We have entered an era of massive and highly effective component re-use, where everyone can, paraphrasing Newton, stand on the shoulders of open source giants.
Recent research also shows that 64 million vulnerable Java open source components were downloaded in 2013. While developers can rely on the Common Vulnerabilities and Exposures (CVE) database, manual review of this database for every component is simply not feasible if an organization wants to release its software on time.
Compounding the challenge, organizations also have trouble keeping track of which components, and which specific versions, are used in which applications. Amplifying the concern further, you also need to consider each component’s dependencies (five on average, but ranging into the hundreds).
I would argue that a manual approach today is truly impossible given:
- the volume of components used
- the complexity of each component
- the cadence with which new vulnerabilities and new component versions are announced
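The arithmetic behind those three points compounds quickly. A toy sketch, with entirely invented component names, shows why: resolving the transitive closure of even a small dependency graph yields far more components to vet than the handful declared directly.

```ruby
require "set"

# Toy dependency graph with invented names; real graphs resolved from
# Central, RubyGems, or NPM are far larger and deeper.
DEPS = {
  "my-app"        => ["web-framework", "json-parser"],
  "web-framework" => ["http-client", "logging", "templating"],
  "json-parser"   => ["logging"],
  "http-client"   => ["socket-lib"],
  "templating"    => ["logging"],
  "logging"       => [],
  "socket-lib"    => []
}.freeze

# Walk the graph depth-first, collecting every component reachable from name.
def transitive_deps(name, seen = Set.new)
  DEPS.fetch(name, []).each do |dep|
    transitive_deps(dep, seen) if seen.add?(dep)
  end
  seen
end

DEPS["my-app"].size            # => 2 components declared directly
transitive_deps("my-app").size # => 6 components that actually need vetting
```

Multiply that ratio by thousands of applications and a steady cadence of new CVEs and new versions, and the case for machine-speed policy enforcement makes itself.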
The better approach is to have humans establish risk thresholds and supporting policies, to have machines automate and enforce those policies, and to have humans manage the inevitable exceptions. That is, we need to complement human-speed approaches with machine-speed capabilities.
Whether organizations use software-based technologies and data services from Sonatype or other vendors, we are now well beyond maintaining open source security at human speed. Would you agree?
When we start developing regression test suites for our codebase, we encounter issues such as code duplication, repetition of setup code, hunting for the required section of code from within actual test code, and managing different layers of tests (field level, section level, page level, and application level). A good test automation framework is built with these design issues in mind.
In this blog, we will cover why we should use the Page Object Pattern, a design pattern for test automation frameworks. We will then create a simple test suite in Ruby to implement this pattern and try executing it on Saucelabs.

The Page Object Pattern Defined:
Every test you write will comprise several components: first, the code needed to navigate through and interact with the elements of the page that you want to test; then, the actual test code itself, which reviews the content of the page and verifies that the actual results seen on the page match the expected results.
The architecture of this framework comprises the following building blocks:
- SeleniumTest class: provides the basic methods needed to set up the test environment. Its initializer lets you specify the driver type, and it has methods to create a browser instance, launch a specific URL in that instance, establish a database connection if required, switch between frames and windows, and refresh the current URL.
- Action class: interacts with the page’s components to get tasks completed. It can change the pages in the browser or bring back a previous page state, interacts with the database for the various inputs required on pages, and can also switch frames and windows.
- Page class: required by both the Action and Test classes to perform changes; it represents the page that both of them use.
- Page Elements class: facilitates locating a specific element on a page (one way is to use the browser start index, for which it has to know the browser being used). It identifies elements based on the id, css, or xpath provided by the user and returns them to actions and tests, and it can also click a specific element and wait for an element to appear on the page.
The advantage of using such a framework is the clear segregation of functions across the various classes. Our Test classes now only have to verify and assert test conditions, while the Action classes handle the action steps to be taken on the page, such as retrieving values or inserting/updating values to change the current state of the system, and the Page classes manage the interactions of the elements within them with the help of the Action or Test classes. This framework also offers reusability and maintainability of the test codebase in the long run.
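That segregation can be sketched in a few lines of Ruby. The page name, element ids, and selector below are illustrative assumptions, not part of any real application: the page class owns every locator and interaction, and a test class would only call these methods and assert on what they return. In real use, `driver` would be a `Selenium::WebDriver` instance; any object exposing the same `navigate`/`find_element` interface works, which also makes the class easy to unit test.

```ruby
# Minimal Page Object sketch for a hypothetical login page.
# All locators live here, never in the tests themselves.
class LoginPage
  def initialize(driver)
    @driver = driver
  end

  # Navigate the browser to the page; returns self so calls can be chained.
  def open(url)
    @driver.navigate.to(url)
    self
  end

  # Perform the login interaction; the test never touches these locators.
  def login(user, password)
    @driver.find_element(id: "username").send_keys(user)
    @driver.find_element(id: "password").send_keys(password)
    @driver.find_element(css: "button[type='submit']").click
    self
  end

  # Expose page content for the test to assert on.
  def welcome_message
    @driver.find_element(css: ".welcome").text
  end
end
```

A test then reads as intent rather than locators: `LoginPage.new(driver).open(url).login("alice", "secret")` followed by an assertion on `welcome_message`.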
(In our further blogs in this series, we will look at implementing a Simple Test Framework on Ruby using Page Object Pattern. We will also write a few simple test cases.)
So, I don’t do this often, but I saw a lot of content this morning, so instead of pushing one huge list at the end of the day, I thought I’d push an earlier instalment.
It will give you something to read during lunch, maybe. There’s little ASP.NET per se, but we have some very interesting OAuth and OWIN content. I decided to repost a few of the links since they were posted days apart.
From the kingpin of Backbone.js, Derick Bailey, come his “lessons learned” from the ad for his book.
Finally, a feature comparison smackdown between Solr and ElasticSearch.
Enjoy!

.NET & ASP.NET
7 Things I Learned From 175,000 Eyes And A Failed Ad | ThoughtStream.new :derick_bailey (lostechies.com) – This guy is basically a Backbone.js master. Check out his book if you are interested.

ElasticSearch
Think twice before trusting us with your personal information…said no 21st century business ever. Whether it’s the swipe of a card at a local convenience store, or that social media app you always find yourself on, using software that could potentially compromise your information is the norm, not the exception.
We’d go insane if we worried about every single transaction that could lead to identity theft or a depleted bank account. So instead, we put our trust in the technical leadership of brands to avoid these disasters on our behalf. Most of the time, there’s nothing to worry about. Most of the time.
Mt.Gox, the world’s largest Bitcoin (digital currency) exchange, recently lost track of 740,000 Bitcoins, resulting in a projected $350 million loss after hackers allegedly planted a bug in the system. Here’s the scoop:
“In its announcement on Monday, Mt. Gox said that a bug in the Bitcoin software made it possible for someone to use the Bitcoin network to alter transaction details to make it appear that a Bitcoin transfer had not taken place when, in fact, it had.”
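The flaw that announcement describes, widely reported as “transaction malleability,” can be sketched in a deliberately simplified model (this is not real Bitcoin serialization, and the wallet names and signature strings are invented). Because the transaction ID is a hash over the whole transaction, signature included, a third party who re-encodes the signature changes the ID without changing the payment, fooling any software that tracks transfers by ID.

```ruby
require "digest"

# Simplified model of transaction malleability; not real Bitcoin encoding.
# The transaction ID is a hash of the *entire* transaction, signature included.
def txid(tx)
  Digest::SHA256.hexdigest(tx.sort.inspect)
end

tx        = { from: "exchange-wallet", to: "customer", amount: 5, sig: "3045..AB" }
malleated = tx.merge(sig: "3046..CD")  # same payment, signature re-encoded

txid(tx) == txid(malleated)  # => false: tracking software sees an "unknown" transfer
tx.reject { |k, _| k == :sig } == malleated.reject { |k, _| k == :sig }  # => true: the payment itself is unchanged
```

The fix adopted later by the wider Bitcoin ecosystem was, in essence, to identify transactions by data the signature cannot alter.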
Mt.Gox reportedly handled about 80% of the world’s Bitcoin trading! Trading and withdrawals were halted, users returning to the website found a blank page, and the “cryptocurrency” industry is now dealing with a major blow to its credibility. There are lessons to be learned from this heist on the Bitcoin network, for software developers and consumers alike. Here are four, in no particular order:
Lesson 1: If a system can be hacked, it will be hacked. Someone will always try to get their hands on valuable information. Whether it’s the stealing of credit card numbers directly, or the selling of emails and passwords on the internet, criminal hacking is a business – a very big business in fact. So stealing Bitcoins (a currency stored in virtual wallets and not backed by any country’s currency) and exchanging them for another currency? An internet thief’s dream come true. The same is true for any company really: If there is sensitive data to be had, it’s only a matter of time before someone goes looking for it.
Lesson 2: Security is a never-ending battle. In fact, it’s an arms race. Do you think your security software is impermeable? Good. But it won’t be for long. For software to be secure, it has to be dynamic and ever-evolving. Just as the software is improving, so too are the hackers. But they can’t beat you at your own game if you keep changing the rules.
Lesson 3: Response matters. Don’t leave your users in the dark. Users found out the hard way that their accounts were gone: Mt.Gox trading was suspended, and a few hours later the website was returning a blank page. Posts were removed from the Mt.Gox Twitter feed. Users were unsure whether they would be reimbursed. No official statement about the Bitcoin heist was released until several days after the fact. Some speculate that the lost Bitcoins went undetected for years. Whether that’s true has yet to be determined, but we can say that the longer a company takes to address a problem, the more rumors circulate and the quicker trust evaporates.
Lesson 4: Don’t get fooled again. There’s no excuse for letting the same security breach happen twice. Granted, fixing this particular bug won’t help these users get their money back, but if a business experiences a breach – and it’s not enough to take down the entire operation – then fixing the flaw lets its users be confident their data is secure going forward. A security breach isn’t the end of the world in most cases, but if the same bug strikes twice, it might be the end of your business.
What other advice would you offer to prevent a heist like this from happening? Do you think it was mismanagement or inevitable? Be sure to let us know in the comment section. And don’t let your company fall apart at the drop of a hack.
This news comes on the heels of CloudBees being positioned by Gartner in the “Visionaries” quadrant of the newly published Magic Quadrant for Enterprise aPaaS and our recent partnership announcement with Verizon Cloud. Needless to say this is a great time for CloudBees!
2013 has been a very important year for CloudBees. Continuous Delivery is radically re-shaping the way enterprises deliver value to the business by accelerating the way applications are built and deployed. CloudBees holds a strategic position at the core of this phenomenon and has been going through tremendous growth, both on-premise and in the public cloud, thanks to our innovative Jenkins CI and PaaS-based solutions.
I'd like to take this opportunity to share my pride for the amazing work that has been achieved by our team and congratulate them all: working in an environment where the overall good of the company comes before individual egos and performance is very powerful. And humbling.
In 2014, we obviously aim to drive continued sales growth and product expansion, but we will also be announcing more partnerships aimed at bringing the power of Continuous Delivery to more developers, more solutions and more businesses around the globe.
Sacha Labourey is the former CTO of JBoss, Inc. He was also co-general manager of middleware after the acquisition of JBoss by Red Hat. He ultimately left Red Hat in April 2009 and founded CloudBees in April 2010.
Follow Sacha on Twitter.
Toyota is recalling all of the 1.9 million newest generation Priuses worldwide due to a software error, the Japanese automaker announced recently. The recall affects model year 2010 through 2014 Priuses. It marks a shift in Toyota's public approach to automotive safety and underscores the increasingly complex nature of onboard vehicle software systems, experts noted.
The problem that occurs in the latest generation of Priuses is tied to software settings that could damage transistors in the hybrid systems, causing them to overheat, the company stated. The error could set off warning lights and in some cases even cause the system to shut down while the car is being driven, prompting the vehicle to stall. The manufacturer said no accidents or injuries from the error have been reported. Owners can take the car to a dealer to fix the issue.
Of the customers affected, around one million are in Japan, approximately 130,000 are in Europe and the other 713,000 are in the United States, where the Prius is the most successful alternative engine car on the market and one of the most popular passenger vehicles in general. Last year, Toyota sold more than 234,000 Priuses in the U.S., and it is the most sold vehicle of any type in California, the Los Angeles Times noted.
A changing automotive landscape
In one sense, the recall is notable because it marks a shift in Toyota's approach toward handling safety issues publicly, experts noted. The company has paid billions of dollars in fines and legal fees in the United States to handle fallout from recalls in 2009 and 2010. While safety recalls were not traditionally publicly acknowledged in Japan, where legal risks from owners are lower, the company has adjusted its approach and become more proactive in initiating recalls in recent years.
The recall is also important, though, because it highlights the growing complexity of today's vehicle software systems, analysts told The New York Times. With millions of lines of code governing cars' electronic systems, the potential for error is increased.
"Cars are getting more complicated," Jack R. Nerad, the executive editorial director at Kelley Blue Book, told The New York Times. "Twenty years ago, we weren't having software glitches."
As automakers seek to avoid widespread recalls of millions of vehicles like the one affecting Priuses, they can look to strengthen their software development process with approaches like source code analysis. With static analysis software, it's possible for developers to ensure they're meeting MISRA compliance standards and catch errors that could manifest as recall-worthy flaws later on.
On March 11, Martin Kochloefl, Software Solutions Consultant for Seapine Software Europe, will be at REConf 2014 in Munich, Germany, to present his talk, “Bridging Gaps with End-to-End Traceability.” Kochloefl will discuss the ways in which teams can meet quality, cost, compliance, and schedule constraints by using an automated product development solution that provides end-to-end traceability.
Documenting and sharing requirements and changes among product development team members can be complex and costly when traditional, manual methods are used. Martin will show how an integrated product development solution can automatically manage complex relationships and artifacts, giving all team members the clarity and visibility they need to drive business results.
If you will be at REConf 2014, be sure to attend Martin’s presentation on March 11 at 1:10 in Room Ammersee 2.
Orange Business chooses Kalistick’s Test Booster solution to increase the effectiveness of PIDI validation tests and obtain the quality expected in production. PIDI is a critical application: it manages the 30,000 daily interventions undertaken to ensure quality broadband service.
Orange Business Services in figures:
* More than 2 million business customers
* 3,000 multinational customers
* Nearly €7.2 billion in turnover in 2012
“PIDI, a critical application for Orange Business.”
“Thanks to PIDI, more than 30,000 interventions are performed every day on the French public network by the 6,000 technicians working for Orange Business Partners (Eiffage, Scopelec, etc.). This application is very sensitive: missing a single customer intervention request, which could leave an information system out of service for several days, is out of the question,” explains Fabrice Varo, Project Manager for the PIDI application.
If the service is unavailable, it immediately impacts not only current interventions in the field but all future ones as well, completely paralyzing the work of the technicians.
“Kalistick perfectly complements HP Quality Center”
“In order to ensure impeccable service quality to our customers, and in view of the technical complexity of the PIDI application, we carry out extensive tests to ensure that no functionality has been impacted by recent developments. The major problem we face today is that, despite test campaigns lasting around four weeks, regressions remain difficult to detect,” he adds.
For these reasons, Orange Business chose Kalistick’s Test Booster solution, which perfectly complements HP Quality Center by accurately identifying the software tests that should be rerun for each version, as well as the risk areas poorly covered by tests, all while significantly reducing validation time.
“Test Effectiveness: Looking good!”
“We’re already taking advantage of the Test Booster tool to validate the new version of the PIDI application. While it is still too early to communicate results, we are confident that the Kalistick solution will allow us to deliver high-quality releases in shorter time frames and at lower cost,” Fabrice Varo confirms.