Feed aggregator

BlazeMeter Extends Open Source Load Testing

Software Testing Magazine - Wed, 03/30/2016 - 17:09
BlazeMeter has announced new functionality giving developers freedom to run any combination of Gatling, The Grinder, Locust, Selenium and JMeter tests in parallel through a single unified control language, both locally and in the cloud.

“Developers that want to excel in the era of continuous delivery need tools that give them the most speed and flexibility possible,” said Andrey Pokhilko, Chief Scientist at BlazeMeter. “We’ve been working with the open source community to develop ideal performance testing tools for any scenario. The new functionality combines the best of our community work with the infrastructure we’ve built up at BlazeMeter, marking a new set of capabilities to bring performance testing to new heights.”

BlazeMeter’s new technology allows teams to create load and performance tests as brief fragments of code in any text editor, removing many of the barriers presented by legacy tools. Much like the way Chef and Puppet allow the definition of “infrastructure as code,” BlazeMeter tests can be defined using familiar YAML or JSON file formats and then easily managed in version control alongside the applications they test.

Parallel test runs shorten build/test cycles and make them more predictable. Adding more tests doesn’t make the testing cycle take longer – test suites run only as long as the longest single test among them. Parallel runs, even with widely different open source tools, allow speed and quality to co-exist.
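For illustration, here is a hypothetical sketch of what such a YAML test definition can look like, in the style of BlazeMeter's open source Taurus project; the announcement shows no actual file, so every field name below is an illustrative assumption:

```yaml
# Hypothetical "tests as code" definition, Taurus-style; field names are
# assumptions, not taken from the announcement. The two executions are
# meant to run in parallel, each driving a different open source tool.
execution:
- executor: jmeter
  concurrency: 100
  ramp-up: 2m
  hold-for: 10m
  scenario: checkout
- executor: gatling
  concurrency: 50
  hold-for: 10m
  scenario: checkout

scenarios:
  checkout:
    requests:
    - http://example.com/cart
```

Because the definition is plain text, it can live in version control next to the application it tests, exactly as the announcement describes.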
Categories: Communities

Kovair Adds Testing Capabilities to LDRA Integration

Software Testing Magazine - Wed, 03/30/2016 - 16:53
Kovair Software, one of the leaders in Integrated Application Lifecycle Management (ALM), has announced the phase II release of its Omnibus Integration Adapter/Connector for LDRA Testbed, a leading product providing core static and dynamic analysis engines for both host and embedded software. LDRA Testbed provides the means to enforce compliance with coding standards such as MISRA, JSF++ AV, CERT C, and CWE, and also provides visibility of software flaws that might typically pass through the standard build and test processes to become latent problems. In addition, test effectiveness feedback is provided through structural coverage analysis reporting facilities which support the requirements of the DO-178B/C standard up to and including Level A.

Safety-critical and security-critical software development brings in a global array of components and technology providers. This makes managing quality, and strictly maintaining different standards over different years of development contracts, extremely challenging, particularly in light of tight budgets and time-to-market requirements.

In the phase I release, the Kovair Omnibus connector for LDRA Testbed enabled developers to easily apply LDRA’s static, dynamic and system-level code analysis capabilities from within their IDEs. The phase I integration also allowed senior quality managers to see test results from LDRA using Kovair’s centralised repository, and then create reports and dashboards for decision making.

In this phase II release, Kovair Omnibus has extended its integration support to LDRA’s TBrun module for executing unit tests (black/white box mode), integration/module tests and isolation unit tests. This will allow users [...]
Categories: Communities

BDDfire 2.0 Released

Software Testing Magazine - Wed, 03/30/2016 - 16:22
BDDfire is an open source tool that sets up a Ruby-Cucumber framework, with all the related tooling, within a minute. It has become very popular because it spares teams the time otherwise wasted setting up a test framework for new projects: you don’t have to spend months setting up your frameworks and researching the tools. BDDfire currently has more than 69,000 downloads.

What’s new in BDDfire 2.0

BDDfire 2.0 adds Docker, Gatling and accessibility support. This means that with BDDfire you can perform load testing and accessibility testing, and execute your scenarios inside Docker containers. With BDDfire 2.0 you can:

  • Set up an instant Ruby-Cucumber framework
  • Set up a Docker environment for Cucumber
  • Set up load and performance scenarios with Gatling
  • Set up an accessibility framework with the Axe accessibility engine

Visit https://rubygems.org/gems/bddfire
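As a rough sketch of the workflow (the `gem install` step is standard RubyGems usage; the generator command name comes from the project README and should be verified for your version):

```sh
# Install the gem from RubyGems
gem install bddfire

# Scaffold the Ruby-Cucumber framework in the current project
# (generator name per the BDDfire README; verify for your version)
bddfire fire_cucumber
```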
Categories: Communities

Analyzing out-of-memory exceptions caused by overly aggressive logging

As a performance architect I conduct many performance tests against Java Enterprise applications, analyzing performance, scalability and stability issues of the application under load. One of our recent tests was against an application that kept crashing with “Out of Memory Exceptions” whenever we ran our JMeter-based load tests. The first thought in such a case is usually: […]
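The teaser breaks off before the diagnosis, but the title points at a familiar pattern. Here is a hypothetical Java sketch (not code from the post) of how unguarded logging can allocate heavily under load, plus two cheaper alternatives:

```java
import java.util.Arrays;
import java.util.List;
import java.util.logging.Level;
import java.util.logging.Logger;

public class LoggingPressureDemo {
    private static final Logger LOG = Logger.getLogger(LoggingPressureDemo.class.getName());

    static void process(List<String> orders) {
        for (String order : orders) {
            // Anti-pattern: the message string is built on every iteration,
            // even when FINE logging is disabled. Under load, this allocation
            // churn (plus any appender that buffers the messages) can
            // exhaust the heap.
            LOG.fine("Processing order " + order + " with full state dump ...");

            // Cheaper: guard the construction explicitly,
            if (LOG.isLoggable(Level.FINE)) {
                LOG.fine("Processing order " + order);
            }
            // or defer it with a Supplier, built only if FINE is enabled.
            LOG.fine(() -> "Processing order " + order);
        }
    }

    public static void main(String[] args) {
        process(Arrays.asList("A-1", "A-2"));
    }
}
```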

The post Analyzing out-of-memory exceptions caused by overly aggressive logging appeared first on about:performance.

Categories: Companies

Learn More About Scaling Agile at Raleigh Tech Summit

The Seapine View - Wed, 03/30/2016 - 15:30

Look for Seapine at the conference.

Would you like to learn more about scaling Agile for your organization? If you’re attending the free Raleigh Tech Summit on Wednesday, April 27, you’re in luck!

Most organizations no longer need to be convinced of the benefits gained by adopting Agile practices. However, many now struggle to balance how Agile teams work with the planning and management needs of the wider organization.

The trick is learning how to manage both sets of needs in the same tool. Join Scott Allen, professional services manager at Seapine Software, as he discusses how to achieve a real-time, end-to-end view of what is going on from the top to bottom of your organization. We’ll also be exhibiting, so be sure to stop by for a closer look at how to manage Agile development with TestTrack.

The Raleigh Tech Summit provides you with the latest IT trends and knowledge you need to make the best possible decisions on behalf of your organization. Registration is free, but seating is limited, so be sure to sign up as soon as possible:

http://raleighsummit.com/

Categories: Companies

Exploratory Testing Extension for Visual Studio

Testing TV - Wed, 03/30/2016 - 10:06
This presentation discusses the new Chrome Exploratory Testing extension for Visual Studio. The extension uses various capture formats: notes, screenshots with annotations, user action logs, and videos. Test your applications on real devices using cloud providers like Perfecto, or test them on browser-based emulators. Video Producer: http://tv.ssw.com/
Categories: Blogs

Obscure Gradle Error: RSA premaster secret error

a little madness - Wed, 03/30/2016 - 05:30

Just a quick post in case anyone else runs into the same obscure scenario. Setting up a new Gradle project on my OS X dev machine, the build could not download any files from Maven Central. When trying to establish an SSL connection I was getting:

RSA premaster secret error

A web search didn’t turn up much, making it clear this was not a common issue. The only hits I found were for outdated or unusual configurations, whereas I believed I had a pretty vanilla setup.

Long story short, the problem was I had globally set the java.ext.dirs system property to the empty string to prevent another, unrelated (and equally obscure) error in the past. That was too blunt an approach — the JVM at the very least needs $JAVA_HOME/jre/lib/ext to be included in java.ext.dirs to load some core jars, including sunjce_provider.jar which includes implementations of the encryption algorithms required to establish SSL connections. User error on my part, which I paid for with wasted time — I hope this post saves someone from the same mistake!
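For anyone in the same spot, here is a sketch of the repair, assuming the property is passed through shell-level JVM options such as GRADLE_OPTS (the path is the pre-Java-9 default extension directory):

```sh
# Too blunt: blanking the property removes the JRE's own extension
# directory, and with it sunjce_provider.jar.
#   export GRADLE_OPTS="-Djava.ext.dirs="
# Keep the default extension directory on the path instead:
export GRADLE_OPTS="-Djava.ext.dirs=$JAVA_HOME/jre/lib/ext"
```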

Categories: Companies

Important notice regarding usage statistics

A bug was introduced in Jenkins versions 1.645 and 1.642.2 which caused Jenkins to send anonymous usage statistics even if the administrator opted out of reporting usage data in the Jenkins web UI. If you are running one of the affected versions, the best and easiest solution is to upgrade. The bug does not affect Jenkins 1.653 or newer, or Jenkins LTS 1.642.4 or newer.

If you cannot upgrade, it is possible to immediately disable submission of usage statistics by running the following script in "Manage Jenkins » Script Console": hudson.model.UsageStatistics.DISABLED = true

This will immediately disable usage data submission until you restart Jenkins. To make this permanent, change your Jenkins...
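The notice is truncated above; one common way to persist a Script Console setting like this (a sketch, not necessarily the mechanism the original notice goes on to describe) is a Groovy init script that Jenkins evaluates on every startup:

```groovy
// $JENKINS_HOME/init.groovy.d/disable-usage-stats.groovy
// Scripts in init.groovy.d run at every Jenkins startup, so the
// opt-out no longer disappears when the instance restarts.
hudson.model.UsageStatistics.DISABLED = true
```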
Categories: Open Source

Appium 1.5.1 Released on Sauce

Sauce Labs - Tue, 03/29/2016 - 22:07

We are happy to support the newly-released Appium 1.5.1. This release fixes a
number of issues with 1.5.0, including one bug that prevented some frameworks
from correctly polling for status during Safari tests.

General

  • allow `platformName` to be any case
  • Windows process handling is cleaned up
  • Desired capabilities `language` and `locale` added

iOS

  • iOS 9.3 (Xcode 7.3) support
  • Fix handling of return values from `executeScript` in Safari
  • Don’t stop if Instruments doesn’t shut down in a timely manner
  • Escape single quotes in all methods that set the value on an element
  • Allow custom device names
  • Make full use of process arguments to Instruments
  • Pass `launchTimeout` to Instruments when checking devices

Android

  • Make use of `--bootstrap-port` server argument
  • Fix `keystorePassword` capability to allow a string
  • Fix handling of localization in Android 6
  • Use Appium’s unlock logic for Chrome sessions
  • Make sure reset works
  • Make unlock more reliable for later versions of Android
  • Allow XPath searching from the context of another element
  • Make full use of process arguments to adb
  • Better error messages when ChromeDriver fails to start
Categories: Companies

Four key elements of unified monitoring in the digital world

Enterprise organisations face mounting pressure to become more customer-centric, agile, scalable and responsive. Compounding this is the fact that IT environments, faced with digital transformation, are changing rapidly and growing more complex by the day. As a consequence, digital performance is a top priority for most, which means the performance monitoring space has become congested […]

The post Four key elements of unified monitoring in the digital world appeared first on about:performance.

Categories: Companies

Test Management Summit, London, UK, April 26 2016

Software Testing Magazine - Tue, 03/29/2016 - 10:00
The Test Management Summit is a one-day conference focused on software testing that takes place in London. The agenda of the Test Management Summit includes topics like “The future of Test Data Management”, “The Evolution of Test Governance and Assurance”, “Developing Excellent Technical Testers”, “Achieving Zero Bugs on the Test Environment”, “Learning How to Tell Our Testing Stories”, “Instructional Games for Testers, Test Leads & Test Managers”, “Techniques for Successful Program Test Management”, “Security – What can testers do now?”, “Putting Models at the Heart of Testing”, “Testing the Internet of Things: the Dark and the Light!”, “Rapid and Effective Team Building Techniques”, “Improving the effectiveness of reviews and Inspections”, “Software versus Project Test Management – The Great Divide?”, “Successful Test Automation for Managers”, and “Making and Testing Internet of Things devices – The Pain and the Gain”.

Web site: http://uktmf.com/

Location: Balls Brothers Conference Centre, Mincing Lane, London EC3R 7PP, United Kingdom
Categories: Communities

From QA to Engineering Productivity

Google Testing Blog - Tue, 03/29/2016 - 02:38
By Ari Shamash

In Google’s early days, a small handful of software engineers built, tested, and released software. But as the user-base grew and products proliferated, engineers started specializing in roles, creating more scale in the development process:

  • Test Engineers (TEs) --  tested new products and systems integration
  • Release Engineers (REs) --  pushed bits into production
  • Site Reliability Engineers (SREs) --  managed systems and data centers 24x7.

This story focuses on the evolution of quality assurance and the roles of the engineers behind it at Google.  The REs and SREs also evolved, but we’ll leave that for another day.

Initially, teams relied heavily on manual operations.  When we attempted to automate testing, we largely focused on the frontends, which worked, because Google was small and our products had fewer integrations.  However, as Google grew, longer and longer manual test cycles bogged down iterations and delayed feature launches.  Also, since we identified bugs later in the development cycle, it took us longer and longer to fix them.  We determined that pushing testing upstream via automation would help address these issues and accelerate velocity.

As manual testing transitioned to automated processes, two separate testing roles began to emerge at Google:

  • Test Engineers (TEs) -- With their deep product knowledge and test/quality domain expertise, TEs focused on what should be tested.
  • Software Engineers in Test (SETs) -- Originally software engineers with deep infrastructure and tooling expertise, SETs built the frameworks and packages required to implement automation.

The impact was significant:

  • Automated tests became more efficient and deterministic (e.g. by improving runtimes, eliminating sources of flakiness, etc.)
  • Metrics-driven engineering proliferated (e.g. improving code and feature coverage led to higher quality products).

Manual operations were reduced to manual verification of new features, typically only at end-to-end, cross-product integration boundaries.  TEs developed extreme depth of knowledge for the products they supported.  They became go-to engineers for product teams that needed expertise in test automation and integration. Their role evolved into a broad spectrum of responsibilities: writing scripts to automate testing, creating tools so developers could test their own code, and constantly designing better and more creative ways to identify weak spots and break software.

SETs (in collaboration with TEs and other engineers) built a wide array of test automation tools and developed best practices that were applicable across many products. Release velocity accelerated for products.  All was good, and there was much rejoicing!

SETs initially focused on building tools for reducing the testing cycle time, since that was the most manually intensive and time consuming phase of getting product code into production.  We made some of these tools available to the software development community: webdriver improvements, protractor, espresso, EarlGrey, martian proxy, karma, and GoogleTest. SETs were interested in sharing and collaborating with others in the industry and established conferences. The industry has also embraced the Test Engineering discipline, as other companies hired software engineers into similar roles, published articles, and drove Test-Driven Development into mainstream practices.

Through these efforts, the testing cycle time decreased dramatically, but interestingly the overall velocity did not increase proportionately, since other phases in the development cycle became the bottleneck.  SETs started building tools to accelerate all other aspects of product development, including:

  • Extending IDEs to make writing and reviewing code easier, shortening the “write code” cycle
  • Automating release verification, shortening the “release code” cycle.
  • Automating real time production system log verification and anomaly detection, helping automate production monitoring.
  • Automating measurement of developer productivity, helping understand what’s working and what isn’t.

In summary, the work done by the SETs naturally progressed from supporting only product testing efforts to include supporting product development efforts as well. Their role now encompassed a much broader Engineering Productivity agenda.

Given the expanded SET charter, we wanted the title of the role to reflect the work. But what should the new title be?  We empowered the SETs to choose a new title, and they overwhelmingly (91%) selected Software Engineer, Tools & Infrastructure (abbreviated to SETI).

Today, SETIs and TEs still collaborate very closely on optimizing the entire development life cycle with a goal of eliminating all friction from getting features into production. Interested in building next generation tools and infrastructure?  Join us (SETI, TE)!

Categories: Blogs

StormRunner Load Ice Climber 1.9 Release Announcement: Get the latest in cloud load testing

HP LoadRunner and Performance Center Blog - Mon, 03/28/2016 - 22:00


The latest version of StormRunner Load, Ice Climber 1.9, has just been released. Just like the beautiful sight of blossoming trees and the fragrant smell of blooming flowers, the new and cool updates and features of StormRunner Load bring joy to all the senses of Agile software testers.

Keep reading to find more about the latest version.

Categories: Companies

Fragile Automation

Sauce Labs - Mon, 03/28/2016 - 15:00

User Interface (UI) Testing.

The idea is simple — automate some UI tests to ensure your application is still behaving as expected. Usually your first set of tests — running green, no doubt — makes you all cheer and pat yourselves on the back. Then you open up the framework to more people. Despite the reviews (so many reviews), the failures start to come, and they don’t stop. Or they run green and then fail and then run green again. And then fail again. So why are they so unstable? Is it bad scripts? Environment issues? Sometimes you just don’t know, and you think you are going to lose your mind. Let’s take a look at some common and potential issues you may be facing.

Architecture, Environments and Settings

Is your infrastructure designed for stability? Are you using on-premises or cloud instances? What may have saved you a dollar upfront could cost you many more down the road – so your testing environment is important.

Understand if your tests require particular system settings. Tests failing because of unwanted server variables is a waste of everyone’s time. We found that out the hard way a long time ago. You may need to have some isolated tests that cannot be run on the same server so the majority of your other tests can pass. (Or maybe decide how important the test really is.)

Or let’s say your testing frameworks are stable, but what about the tools or libraries you are importing? Are you pinning your stack to a version of these tools or libraries? A new version can completely break everything.
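As a small illustration of pinning (a sketch using Gradle; the article names no particular build tool, and the artifact and version are only examples):

```groovy
// build.gradle -- pin exact versions rather than floating ranges, so an
// upstream release can't silently break the test suite overnight.
dependencies {
    testCompile 'org.seleniumhq.selenium:selenium-java:2.53.0' // pinned
    // testCompile 'org.seleniumhq.selenium:selenium-java:2.+' // floating: avoid
}
```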

Many times, teams are pressed to have tests run as fast as possible. The way to accomplish this is parallelization. That said, if your automation architecture was not designed to support it, you may run into difficulties.

Back-end Stability

How confident are you in your back-end acceptance tests? Without a stable base, it is hard to be confident in the UI built on top of it. When you write end-to-end (E2E) workflow tests, you are hitting your back-end. If they aren’t passing, you can’t be sure if your UI E2E tests are providing value, or if they are being impacted by deeper issues. Consider using mock API testing as part of your UI testing strategy. This reduces some of the impact of back-end issues, and gives more flexibility to the QA team.
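As a sketch of that idea, here is how a stubbing library such as WireMock (one option; the article does not prescribe a tool) can stand in for a back-end endpoint during a UI test:

```java
import com.github.tomakehurst.wiremock.WireMockServer;

import static com.github.tomakehurst.wiremock.client.WireMock.*;

public class MockBackendExample {
    public static void main(String[] args) {
        // Serve canned responses from a local port instead of the real back-end.
        WireMockServer server = new WireMockServer(8089);
        server.start();
        configureFor("localhost", 8089);

        // Any UI test pointed at localhost:8089 now gets a stable response,
        // regardless of the state of the real acceptance environment.
        stubFor(get(urlEqualTo("/api/students/42"))
                .willReturn(aResponse()
                        .withStatus(200)
                        .withHeader("Content-Type", "application/json")
                        .withBody("{\"id\": 42, \"name\": \"Test Student\"}")));

        // ... run the UI tests against http://localhost:8089 here ...

        server.stop();
    }
}
```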

Timing Issues

Consider how complex your system under test is. How long does it take to build? Does your back-end get out of sync with the UI and perhaps have changes that may not be in the UI yet (or vice versa)? In fast-paced modern development these types of issues can come up more and more.

Also consider how you are setting up data. For example, let’s say you need to create a user, a student, an assignment, and a submission for that assignment. If your test is to grade that submission, but your setup hasn’t completed for some reason, your test will fail because it can’t find the submission.
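One mitigation is to synchronize on the data rather than on elapsed time: poll until setup has visibly completed before the test proceeds. A sketch using Selenium’s WebDriverWait (the locator and timeout are hypothetical):

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class GradeSubmissionStep {
    // Wait for the submission created during setup to actually appear
    // before trying to grade it, instead of racing a fixed-time setup.
    static WebElement awaitSubmission(WebDriver driver) {
        WebDriverWait wait = new WebDriverWait(driver, 30); // timeout in seconds
        return wait.until(ExpectedConditions.visibilityOfElementLocated(
                By.cssSelector("[data-test-id='submission-row']")));
    }
}
```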

Think about how the team can help. Can your developers and automation champion design and deploy solutions for mock testing? Can DevOps help deploy those solutions with configuration management automation? Collaborating with the broader team for both timing and backend issues will help everyone.

Knowing What to Test: Just Because You Can, Doesn’t Mean You Should

Seriously think about what you are testing through the UI, and why. The more tests you have, the harder your test suite will be to maintain and keep running green. You should make higher-level decisions about the goals of your end-to-end tests. It is unlikely that the goal is to test every possible scenario. (Testing critical workflows is much more likely.) No matter what, there is a constant balancing act that you need to consider.

Try to focus your tests on UI components and business logic. For example, I just had a conversation with a developer — and, while I’m thrilled this person was thinking about testing — this person wanted to test how to cancel an item delete in two different ways. Rather than testing every combination, I recommended simplification, because I knew it would not have a critical workflow impact. In this case, I asked the developer, “Why not just focus on UI component/unit tests instead?”

Automation provides a ton of benefits. But in order to make sure those benefits are realized over a long period of time, some planning is required. Consider your business goals and prepare for the things that can commonly go wrong.

Ashley Hunsberger is a Quality Architect at Blackboard, Inc. and co-founder of Quality Element. She’s passionate about making an impact in education and loves coaching team members in product and client-focused quality practices.  Most recently, she has focused on test strategy implementation and training, development process efficiencies, and preaching Test Driven Development to anyone that will listen.  In her downtime, she loves to travel, read, quilt, hike, and spend time with her family.

Categories: Companies

.NET Developer Tips and Tricks

NCover - Code Coverage for .NET Developers - Mon, 03/28/2016 - 12:00

We all have our own tips and tricks to what makes good, clean code that leads to powerful applications. We wanted to take a moment and share a couple of insights from some .NET developers on how they make powerful products. What are your tips?

Jérémy Jeanson

Microsoft Integration MVP Jérémy Jeanson discovered computers very early in life when his father brought home a Lynx: a funny box that produced Bezier curves on a small screen. Intrigued by the machine, he tried the basics, discovered storage on magnetic tape and powerful computers with 64Kb RAM. During his studies, Jeremy adopted .NET technologies, and a new world opened to him.

His best technology tip? KISS (Keep It Simple, Stupid) and keep an open mind. He believes developers should always keep their options open, neglecting no technology. Jeremy says, “More mastery of a technology offers a better chance to give customers a simple and effective solution…You should not choose the solution you prefer, but one that is most suited.”

More advice and recent projects from Jeremy on his blog and @jeremyjeanson.

Gian Paolo Santopaolo

Italian Hardware Interaction Design and Development MVP Gian Paolo Santopaolo designs and develops NUI user experiences for multitouch devices, focusing primarily on PPI by Microsoft and Kinect. He researches and creates prototypes for tactile and gesture recognition solutions with particular attention to the interaction between the two. For over a decade he has been dealing with architecture, design and development of enterprise applications with extreme scalability requirements by implementing the latest technologies.

Gian Paolo has always had a knack for working in teams, so sharing his expertise in communities comes naturally. He believes exchanging ideas is the way to find solutions from which everyone can benefit. His best advice can be summed up in two words: sharing passion. Gian Paolo says, “Always share your expertise, because history teaches us that sharing knowledge always leads to a collective growth.”

See what Gian Paolo is sharing on Twitter @gsantopaolo and at his company, Software Lab.

The post .NET Developer Tips and Tricks appeared first on NCover.

Categories: Companies


Empty Customer Chairs – Illustrating the Absence of Customer Participation


Once upon a time, there was a company that said it was customer focused.  They used Agile methods to incrementally build software. At the end of an iteration, each team within the company would conduct a demo session.  The feedback from the demonstrations would be used to adapt the product toward what was deemed customer value.  When the demos were investigated, it was learned that there were no actual customers or end-users in them. The question that may then be posed is: if there are no customers in the demos, then what are the teams adapting to?
What appears to be a challenge for some companies that say they are customer-focused or Agile is how to successfully construct a functional demo.  The short answer is that customers, or at least end-users, must attend the demo.  Of course, this is more easily said than done.  The long answer is to establish an Agile Customer Feedback Vision.  This vision is a strategy for identifying the right customers to attend, applying personas that represent the various customer groups, establishing feedback sessions throughout the work, and then motivating the customers to attend the feedback sessions.
In the meantime, how do you highlight the problem of the missing customers?  Certainly those in the company understand that gaining customer feedback is important to the success of a product.  Yet even when companies are given the mechanics of a customer feedback vision, customers are still found missing from the demos.  Why is that?  Maybe it is important to illustrate the obvious: that customers are indeed missing from the demos.  One way to illustrate the obvious to companies and their teams is by applying the Empty Customer Chairs technique.  Empty Customer Chairs is a visual way to highlight the absence of customers at a demo of the product.  The technique is applied by placing three chairs that represent the customer at a demonstration. If customers attend the demo, they fill the chairs.  If no customers attend the demo, then the chairs remain obviously empty.  If the demo is held virtually, then three virtual screens are designated for customers.  If no customers attend, then those three screens remain empty.
One would hope that a company or team realizes the benefit of customer participation on its own.  Until then, this technique can help you illustrate the obvious lack of customer participation, with the intent of motivating the filling of those seats. At the end of the day, it is all about delivering customer value, and this technique helps you highlight the importance of that value through the absence of the customer.
Categories: Blogs

Remarks on Wikimedia Foundation recent events

Chris McMahon's Blog - Sun, 03/27/2016 - 21:45

If you pay attention to Wikipedia culture and the WMF, you may know that the Executive Director of the WMF, Lila Tretikov, has resigned amid some controversy.

It is an extraordinary story, especially since, given the nature of Wikipedia culture, so much information about events is publicly available. I'll point you to Molly White's "Wikimedia timeline of recent events" as an excellent synopsis of Ms. Tretikov's tenure as ED.  The thing that strikes me most about that timeline is the number of people who left, and the long tenure of each person who departed. On the same subject, Terry Chay's note published on Quora also addresses this.

My own tenure at WMF was just over three years, from 2012 to 2015. In that time Željko Filipin  and I built an exceptionally good browser test automation framework, which at the time I left WMF was in use in about twenty different WMF code repositories. My time at WMF was roughly evenly split between Ms. Tretikov as ED and under the previous ED, Sue Gardner.

There are two things about Wikipedia and WMF that I think are key to understanding the failures of communication and culture under Ms. Tretikov's leadership.

As background, understand that everyone in the Wikimedia movement, without exception, and sometimes to a degree approaching zealotry, is committed to the vision: "Imagine a world in which every single human being can freely share in the sum of all knowledge."  I still am committed to this myself. My time at WMF absolutely shaped how I see the world.

Given that, what is important to understand is that Wikipedia is essentially a conservative culture. The status quo is supremely important, and attempting to change the status quo is *always* met with resistance. Always. There is good reason for that: Wikipedians see themselves as protecting the world's knowledge, and changes to the current status are naturally perceived as a threat to the quality or even the existence of that knowledge.

The other thing important to understand is that many of the staff at WMF come from the Wikipedia movement or the FOSS movement. Many (not all) of the technical staff began working with Wikimedia/FOSS software in college or even in high school, and ended up employed by WMF without ever experiencing how software is made and managed elsewhere. Likewise many (not all) of the management staff were (and are) important figures in the Wikipedia movement, without much experience in other milieux.

In practice, when attempting to make a change to Wikimedia software or Wikipedia culture, the default answer is always "no". No, you can't use that programming language, that library, that design approach, that framework. No, you can't introduce that feature or that methodology. 

So a big part of the work for those working in this culture is persuasion. One is constantly justifying one's ideas and actions to both one's peers and to management, and to the community, in the face of constant skepticism.  Wikipedians talk about "consensus culture" but in practice consensus is actually more along the lines of "grudging acceptance".  Sue Gardner's most recent blog post explains this better than I ever could.

And because so many Wikipedians have such a dearth of experience of other tech culture, NIH (Not Invented Here) is rampant. It was difficult to introduce proven, reliable, well-known tools simply because they were *too* well-known; they aren't *Wikipedia* tools, they don't have *Wikipedia* support, there is limited knowledge of them within the culture.

The result of these forces is that significant feature releases tend to be fiascos, but each fiasco of a somewhat different character. When WMF released the Visual Editor, the software was not fit for use, everyone involved knew it was not fit for use (or should have, they were certainly told), and the community rejected it for good reason. On the other hand, the Media Viewer *was* fit for use when it was released, but it was such a new paradigm that the community rejected it even more decisively than they had the Visual Editor. We could even speculate that had Media Viewer been as unusable upon release as the Visual Editor was, it might have received a kinder reception from the Wikipedia community.

Some notable exceptions to the fiasco release pattern were the Mobile Web work; the Mobile Web team did a great job and demonstrably made Wikipedia better, even if often over the occasional objections of their peers on the technical staff.  And the rollout of HHVM went well, as did the introduction of ElasticSearch, but none of these projects faced the Wikipedia old guard directly.

It also is notable that it took Željko and me three years to get our work accepted widely across all of WMF. Today I am building essentially the same system for Salesforce.org (the philanthropic entity attached to Salesforce.com) as Željko and I did for WMF. I expect to have my Salesforce.org project in the same position as the WMF project in one year, because I don't face the constant hurdle of having to persuade and persuade and persuade. Again, this is not necessarily a Bad Thing: the institutional skepticism and constant jockeying for acceptance of ideas, tools, and practices at WMF is a mechanism that protects the core mission of Wikipedia, even if it often makes the culture psychologically trying if not outright poisonous. You could argue that having to justify beforehand and evangelize afterward every step we took made the system that Željko and I built better than it would otherwise have been. If I seem to have a low opinion of the WMF understand that in my time at WMF I did some of the best work I've ever done, and I consider my time there to be the pinnacle of my career so far.

So it is perfectly understandable that Ms. Tretikov as Executive Director would want to launch an ambitious skunkworks project in secret. This is something CEOs do. CEOs have discretion over the budget, and they are responsible to shareholders for profits. But the Executive Director of the WMF cannot expect to hide a quarter-million dollar project engaging entities beyond Wikipedia without dire consequences, which is exactly what happened. Or was at least the final act in a long series of poorly executed maneuvers that alienated staff and community to the point of near-paralysis, and that caused a monumental loss of faith from the community as well as a huge loss of institutional knowledge as so many experienced staff abandoned the Foundation, or were abandoned by the Foundation.

I imagine that WMF and the Wikipedia movement will toddle on much as they always have. The Wikipedia vision of free knowledge for every human being remains compelling. And I hope that this troublesome period in the history of WMF can serve as a lesson not only to the Wikipedia community, but to the rest of us concerned with how best to make software work for our world.



Categories: Blogs

Two topics best avoided in retrospectives

thekua.com@work - Sun, 03/27/2016 - 16:51

When I introduce people to retrospectives, I am often asked which topics should and should not be covered. When I have asked this question of other people, I hear answers like “Everything should be open for discussion” or “No topic is ever taboo.”

Although I agree with the sentiment, I strongly disagree with the statements.

Photo from coltera’s Flickr stream under the Creative Commons licence

Yes, being able to discuss most topics openly as a team, particularly where there are different views, is a sign of a healthy, collaborative team. Even then, I still believe there are two topics that teams should watch out for, because I feel they do more harm than good.

1. Interpersonal conflict

Imagine yourself working with a team where two people suddenly start shouting at each other. You hear voices continue to escalate, maybe even watching someone storm off. An uncomfortable silence descends in the open team space as no one is quite sure how to react. Is this something you discuss as a team in the next retrospective?

Perhaps, if the issue involves the entire team. When it involves two people whose tension escalated too quickly over a single topic, it is more likely you need mediation and a facilitated conversation. A person in a leadership role (e.g. Project Manager, Line Manager, or Tech Lead) may be an ideal person with the context to do that, but it may also be better to find a mediator who can get to each person’s interests and find a way both to move forward and to start healing the relationship.

Although it will be impossible to ignore the topic in a retrospective, the focus should be on team expectations about behaviour, or identify ways the team can support each other better. It is unlikely you will solve, as a group, the conflict between the two people without making each of them very uncomfortable and unsafe.

2. Behavioural issues for a single person

Just as you wouldn’t isolate two people and make the entire retrospective about them, teams must not “gang up” on a single person unless that person feels safe having the topic discussed. If the entire team complains about what a single person is doing, the person is likely to feel targeted, isolated and potentially humiliated in front of their peers.

It may still be important to highlight issues impacting the entire team, but be careful that a retrospective does not become a witch hunt.

Where repeated, consistent behaviour needs to be addressed, a better solution is targeted one-to-one feedback.

Conclusion

Retrospectives are an important practice for agile teams, but they are not a tool that will solve all problems. Retrospectives work well alongside other tools, such as mediation and one-to-one feedback, that offer better, more focused conversations for smaller groups.

What topics do you think should be avoided in retrospectives? Please leave a comment with your thoughts.

Categories: Blogs
