
Feed aggregator

“Workflow” Means Different Things to Different People

Wikipedia defines the term workflow as “an orchestrated and repeatable pattern of business activity enabled by the systematic organization of resources into processes” – processes that make things or just generally get work done. Manufacturers can thank workflows for revolutionizing the production of everything from cars to chocolate bars. Management wonks have built careers on applying workflow improvement theories like Lean and TQM to their business processes.

What does workflow mean to the people who create software? Years ago, probably not much. While this is a field where there’s plenty of complicated work to move along a conceptual assembly line, the actual process of building software has historically included so many zigs and zags that the prototypical pathway from A to Z was less a straight line than a sideways fever chart.

Today, workflow, as a concept, is gaining traction in software circles, with the universal push to increase businesses’ speed, agility and focus on the customer. It’s emerging as a key component in an advanced discipline called continuous delivery that enables organizations to conduct frequent, small updates to apps so companies can respond to changing business needs.

So, how does workflow actually work in continuous delivery environments? How do companies make it happen? What kinds of pains have they experienced that have pushed them to adopt workflow techniques? And what kinds of benefits are they getting?

To answer these questions, it makes sense to look at how software moves through a continuous delivery pipeline. It goes through a series of stages to ensure that it’s being built, tested and deployed properly. While organizations set up their pipelines according to their own individual needs, a typical pipeline might involve a string of performance tests, Selenium tests for multiple browsers, Sonar analysis, user acceptance tests and deployments to staging and production. To tie the process together, an organization would probably use a set of orchestration tools such as the ones available in Jenkins.
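
To make this concrete, here is a minimal sketch of such a pipeline in Jenkins Pipeline’s scripted Groovy syntax. The stage names mirror the example above; the shell scripts and commands are hypothetical placeholders, not a prescribed setup.

    // A hypothetical Jenkins scripted Pipeline covering the stages named
    // above. Every shell script here is an illustrative placeholder.
    node {
        stage('Build') {
            sh './build.sh'
        }
        stage('Performance tests') {
            sh './run-perf-tests.sh'
        }
        stage('Selenium tests') {
            // Run the browser suites in parallel; each branch could also
            // allocate its own node to avoid sharing one workspace.
            parallel(
                firefox: { sh './run-selenium.sh firefox' },
                chrome:  { sh './run-selenium.sh chrome' }
            )
        }
        stage('Sonar analysis') {
            sh './run-sonar-analysis.sh'
        }
        stage('Deploy to staging') {
            sh './deploy.sh staging'
        }
        stage('User acceptance tests') {
            sh './run-uat.sh staging'
        }
        stage('Deploy to production') {
            sh './deploy.sh production'
        }
    }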

Assessing your processes

Some software processes are simpler than others. If the series of steps in a pipeline is simple and predictable enough, it can be relatively easy to define a pipeline that repeats flawlessly – like a factory running at full capacity.

But this is rare, especially in large organizations. Most software delivery environments are much more complicated, requiring steps that need to be defined, executed, revised, run in parallel, shelved, restarted, saved, fixed, tested, retested and reworked countless times.

Continuous delivery itself smooths out these uneven processes to a great extent, but it doesn’t eliminate complexity all by itself. Even in the most well-defined pipelines, steps are built in to sometimes stop, veer left or double back over some of the same ground. Things can change – abruptly, sometimes painfully – and pipelines need to account for that.

The more complicated a pipeline gets, the more time and cost get piled onto a job. The solution: automate the pipeline. Create a workflow that moves the build from stage to stage, automatically, based on the successful completion of a process – accounting for any and all tricky hand-offs embedded within the pipeline design.

Again, for simple pipelines, this may not be a hard task. But for complicated pipelines, there are a lot of issues to plan for. Here are a few, along with a sketch after the list of how a workflow engine can address them:

  • Multiple stages – In large organizations, you may have a long list of stages to accommodate, with some of them occurring in different locations, involving different teams.
  • Forks and loops – Pipelines aren’t always linear. Sometimes, you’ll want to build in a re-test or a re-work, assuming some flaws will creep in at a certain stage.
  • Outages – They happen. If you have a long pipeline, you want to have a workflow engine ensure that jobs get saved in the event of an outage.
  • Human interaction – For some steps, you want a human to check the build. Workflows should accommodate the planned – and unplanned – intervention of human hands.
  • Errors – They also happen. When errors crop up, you want an automated process to let you restart where you left off.
  • Reusable builds – In the case of transient errors, the automation engine should allow builds to be used and re-used to ensure that processes move forward.
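
As promised, here is a hedged sketch of how Jenkins Pipeline constructs could map to several of these concerns: stash/unstash for reusable builds, retry for transient errors, and input for human checkpoints. The script names are hypothetical placeholders.

    // Hypothetical fragments only; each construct addresses one of the
    // issues listed above.
    node {
        stage('Build') {
            sh './build.sh'
            // Reusable builds: save the outputs so later stages (or a
            // restarted run on another node) can reuse them instead of
            // rebuilding from scratch.
            stash name: 'binaries', includes: 'build/**'
        }
        stage('Test') {
            unstash 'binaries'
            // Transient errors and outages: retry a flaky step a few
            // times before failing the whole pipeline.
            retry(3) {
                sh './run-tests.sh'
            }
        }
        stage('Approval') {
            // Human interaction: pause the workflow until a person has
            // checked the build and approved the deployment.
            input message: 'Deploy this build to production?'
        }
        stage('Deploy') {
            unstash 'binaries'
            sh './deploy.sh production'
        }
    }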

In the past, software teams have automated parts of the pipeline process using a variety of tools and plugins. They have combined the resources in different ways, sometimes varying from job to job. Pipelines would get defined, and builds would move from stage to stage in a chain of jobs — sometimes automatically, sometimes with human guidance, with varying degrees of success.

As the pipeline automation concept has advanced, new tools are emerging that account for many of the variables that have thrown wrenches into more complex pipelines over the years. Some of the tools are delivered by vendors with big stakes in the continuous delivery process – known names like Chef, Puppet, Serena and Pivotal. Other popular continuous delivery tools have their roots in open source, such as Jenkins.

While we are mentioning Jenkins, the community recently introduced functionality specifically designed to help automate workflows. Jenkins Pipeline (formerly known as Workflow) gives a software team the ability to automate the whole application lifecycle – simple and complex workflows, automated processes and manual steps. Teams can now orchestrate the entire software delivery process with Jenkins, automatically moving code from stage to stage and measuring the performance of an activity at any stage of the process.

Conclusion
Over the last 10 years, continuous integration has brought tangible improvements to the software delivery lifecycle – improvements that enabled the adoption of agile delivery practices. The industry continues to evolve. Continuous delivery has given teams the ability to extend beyond integration to a fully formed, tightly integrated delivery process whose tools and technologies work in concert.

Pipeline brings continuous delivery forward another step, helping teams link together complex pipelines and automate tasks every step of the way. For those who care about software, workflow means business.

This blog entry was originally posted on Network World.

Blog Categories: Jenkins
Categories: Companies

Dynatrace partners, HCL Technologies & CSC stand tall in new Gartner report

Here at Dynatrace, we were pretty excited (but not completely surprised) to see a number of our partners making the A-list in Gartner’s newly launched Magic Quadrant report, which identifies the top 20 Public Cloud Infrastructure Managed Service Providers Worldwide.

Accenture, HCL, CSC, Capgemini, Melbourne IT, Bulletproof, Rackspace, Wipro and Infosys are among our partners that made the list. I specifically called out HCL and CSC because it just so happens that we’ve been active in our joint marketing efforts recently.

HCL Technologies CTO – Kalyan Kumar

I recently caught up with Kalyan (@kklive) at HCL’s UK office to get the lowdown on why they chose to partner with Dynatrace.

Check out the video here.

For me, the most exciting takeaway from Kalyan’s interview was hearing just how forward-thinking HCL’s strategy is – they’re adopting AI, robotics and machine learning technologies every day to drive improved services and build better products for their customers.

And I’m proud to say that Dynatrace plays a critical role in HCL’s ability to provide a full-stack application monitoring solution through our integration into their service offering, DryIce. If you listen to the video, you’ll hear Kalyan reference one of the world’s leading brands – Manchester United.

Speaking of the future…

Recently at our global Perform event in Las Vegas, I had the pleasure of interviewing both Kalyan Kumar from HCL and JP Morganthal from CSC about the big trends impacting digital delivery for businesses tomorrow.

Have a quick look and listen to our on-stage chat here.

For me, the discussion brought home some great points that underscore our unified monitoring mandate here at Dynatrace – to see every user, across every application, everywhere:

Our focus should be on outcomes, not data

“It’s not about the nuts and bolts. Too much data hits operations. Leading them to question, what does it mean? What do you do with it? You need to focus on outcomes. Show me where the issue is. What do I need to focus on?” – Kalyan Kumar

Visibility is of the utmost importance

“Managing distributed apps is really complex and there are very few tools out there that really focus on understanding all of the connection points and the flow of communications and dependencies. That’s critical to being able to understand how to troubleshoot a problem when something occurs and to understand the health of a distributed app.” – JP Morganthal

Cultural change is here

“We’re departing from infrastructure operations monitoring. As the cloud comes in, and as we get commoditized hardware, what we’re seeing … there is a gradual shift towards an application centric universe and it’s really beginning to change things and the way people think.” – JP Morganthal

AI is the answer to the complexity challenge

“Application complexity leads to a situation where human IT operations is no longer possible. Bring in artificial, augmented intelligence… let the system handle the complexity and provide the insights.” – Kalyan Kumar

And the winner is…

With the rate of innovation at HCL exceeding expectation, it’s no wonder this year we awarded Kalyan and his team at HCL the Dynatrace R&D Mover and Shaker award for being the most innovative development partner of 2017.

Big thanks to HCL and CSC for partnering with Dynatrace – we applaud your tireless efforts to innovate and succeed for your customers.

The post Dynatrace partners, HCL Technologies & CSC stand tall in new Gartner report appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

It’s a Small (Testing) World After All

PractiTest - Wed, 03/22/2017 - 16:22

Hello Testers of the Free World

Categories: Companies

DevSecOps: A More Deterministic Approach

Sonatype Blog - Wed, 03/22/2017 - 15:00
Is security an inhibitor to DevOps agility? To answer this question, we need to take a quick look at the differences between DevOps, QA and Security when it comes to automation issues.

To read more, visit our blog at www.sonatype.org/nexus.
Categories: Companies

DevSecOps: In Time for Security

Sonatype Blog - Wed, 03/22/2017 - 15:00
Changing Mindsets. Historically, developers have prioritized functional requirements over security when building software. While secure coding practices are important, they have often fallen into secondary or tertiary requirements for teams building applications against a deadline.

To read more, visit our blog at www.sonatype.org/nexus.
Categories: Companies

New Features for HPE Jenkins Plugin v5.1 Including Enhanced Integration with UFT 14.00 & ALM Octane

HP LoadRunner and Performance Center Blog - Wed, 03/22/2017 - 11:14

Introducing the new features of HPE Jenkins plugin version 5.1, including enhanced integration with UFT 14.00 and ALM Octane. Learn more…

Categories: Companies

Automated naming of mobile user-actions & grouping of web requests

Keeping up with new mobile-app features has traditionally been a real challenge when relying on manual instrumentation. Automatic instrumentation has proven to be extremely efficient at quickly instrumenting mobile apps—without requiring manual configuration of source code. With auto-instrumentation, you’re guaranteed that all of your mobile app’s new features are instrumented as they come online. One downside of automatic instrumentation is that, in some cases, automatic detection and naming of user actions and grouping of web requests has been less than optimal. This issue is addressed in the latest release of Dynatrace. Extraction rules that automatically group and aggregate web-request metrics can now also be defined using regular expressions.

Set up user-action naming rules

The mobile application page below shows a typical user action captured by Dynatrace OneAgent. The highlighted AppStart user action represents the startup of the app. The name of the app (easyTravel) is included in parentheses.


To create naming rules for mobile user actions
  1. Select Applications from the navigation menu.
  2. Select your mobile application.
  3. Click the Browse (…) button.
  4. Click Edit.
  5. Select User actions.
  6. Click the Add naming rule button.

Three types of rules are available for cleaning up or extracting specific information from your auto-detected mobile user actions and web requests: cleanup rules, naming rules, and extraction rules.


User-action naming example

In this first example, we’ll use a naming rule to rename the auto-generated AppStart (easyTravel) user action to Startup.

The naming rule shown below states that all user-action names beginning with the string AppStart are to be renamed and grouped under the name Startup. When you click the Preview button, the actual incoming stream of user actions is retrieved and the effects of the new rule are displayed in a preview further down the page.


Extraction rule example

Another useful approach to automated user-action naming involves setting up extraction rules via regular expressions. Extraction rules are used to replace variable web-request URL elements (for example, session data, product IDs, or GUIDs) with fixed strings. With variable elements replaced with fixed strings, the resulting web requests can be grouped correctly. In the process, all web request response-time and error-rate metrics can also be aggregated correctly.

To group web requests that have variable elements, and therefore to correctly aggregate all their response-time and error-rate metrics, it’s necessary to define specific extraction rules, as below. An extraction rule can be defined using a regular expression that selects and replaces the variable part of a URL with a fixed string. In the example below, the variable GUID values following the /feeds/ subpath are replaced with the fixed path /feeds/*/. The asterisk symbol (*) is a wildcard that represents all available GUIDs.

As a result of this rule, all calls to the /feeds/ API endpoint are grouped together (a sketch of the underlying substitution follows the examples below).

Variable API endpoints:

/feeds/42424224343423423432/

/feeds/33453345345353453453/

/feeds/32342423424234234243/

Resulting fixed API endpoint:

/feeds/*/
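
For illustration only, the short Groovy sketch below mimics the effect of such an extraction rule. The regular expression and the helper closure are assumptions chosen for this example, not Dynatrace’s actual rule syntax.

    // Replace the variable GUID segment after /feeds/ with a wildcard so
    // that all three example URLs collapse into a single group.
    def extract = { String url ->
        url.replaceAll('/feeds/[0-9a-f]+/', '/feeds/*/')
    }

    assert extract('/feeds/42424224343423423432/') == '/feeds/*/'
    assert extract('/feeds/33453345345353453453/') == '/feeds/*/'
    assert extract('/feeds/32342423424234234243/') == '/feeds/*/'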

You can see the results in the Preview this rule section in the example below.

The post Automated naming of mobile user-actions & grouping of web requests appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

Seamless ServiceNow CMDB sync & problem detection

Managing highly dynamic service and application infrastructures with a CMDB database can be cumbersome and error-prone. Modern microservices infrastructures commonly contain thousands of individual business-critical services and related dependencies. Dynatrace automatically discovers and monitors all such services and applications in real time, detects deviations from normal behavior (availability, performance, and/or errors), and synchronizes this data with your ServiceNow instance.

Dynatrace anomaly detection

The first step toward gaining full technology insight into each individual service request of each of your customers is to install Dynatrace OneAgent on your hosts. Once installed, OneAgent automatically discovers all software services and applications running on your host and detects all communication relationships between your services in real time.
Dynatrace immediately calculates a multi-dimensional baseline with up to 10K cells for each service and application in your environment. This baseline allows Dynatrace to automatically detect degradations from normal behavior and inform you about complex problems and any impact on customer experience. Problems include detail related to all affected services and applications, their relationships, as well as root cause information that is correlated with each individual service.

Dynatrace ServiceNow CMDB synchronization

With the new Dynatrace ServiceNow CMDB synchronization application, all auto-discovered hosts, applications, and services—along with their relationships—can be synchronized with your ServiceNow ITIL CMDB database.
The main benefits of seamless integration between Dynatrace and your ServiceNow instance are:

  • Automatic synchronization of auto-detected services and applications, along with their used_by relationships, in real time.
  • Automatic synchronization of monitored hosts and virtual machines, along with their attributes.
  • Automatic push of Dynatrace-detected problems in your monitored infrastructure to your ServiceNow incidents list.
  • Automatic linking of detected problems with all affected CMDB CIs.

The image below compares how the synchronization of a Dynatrace-discovered application and its relationships to auto-discovered business-critical services looks in Dynatrace Smartscape and how it looks when synchronized within the ServiceNow dependency map.

Another benefit for ServiceNow users is detailed descriptions of all application dependencies within each architectural layer, as shown below.

When Dynatrace discovers an availability, performance, or error-related problem within your environment, the problem is pushed to your ServiceNow instance and automatically mapped with previously synchronized CMDB CI elements, as shown below.

To fully benefit from Dynatrace’s state-of-the-art, AI-powered technology monitoring with ServiceNow, head over to the ServiceNow app store and install the Dynatrace Monitoring and CMDB Integration application.
For more detail on synchronizing Dynatrace monitoring with ServiceNow, see How do I set up ServiceNow problem notifications?

The post Seamless ServiceNow CMDB sync & problem detection appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

Improved design & functionality of Service pages

Over the past few months, we’ve added numerous enhancements to our Service overview pages. To maximize the value of these enhancements, it became necessary to build an entirely new page design. We’re now proud to present all the new views and features that the enhanced Service overview page design has to offer.

Service overview page

As you can see from the image below, the service overview page has received a major design overhaul. In addition to a redesigned service infographic and new features, the chart area has been reduced to several small trendline charts. In this way, more monitoring data is now visible at a glance, while the details are still accessible by clicking on the trend charts.

To view the new service overview page
  1. Select Transactions & services from the navigation menu.
  2. Select a service from the list.

Improved service infographic

The new service infographic (see below) follows the same logic and includes the same high-level detail that was included in the previous design. Just click an infographic tile to view detail regarding caller and callee services, real user experience, and performance metrics for related services and databases.

The new infographic also provides information about the processes and hosts that a service runs on. Availability status and detail regarding recent calls is included for both Calling services and Processes and hosts.

The service overview infographic now also explicitly states when a service receives traffic over the network that is not explicitly monitored by Dynatrace. The Network clients box (see below) appears when such traffic is detected.

Load balancer and proxy chains

Another major enhancement on the newly designed service overview page is the inclusion of proxy and load balancer data. Just click the calling Applications tile or the Network clients tile to see information about related proxies and load balancers (see below).

Dynatrace detects proxies and load balancers that exist between services—for example, when a web server directs traffic to your application server, but a load balancer operates in front of the web server (as is the case with Amazon Elastic Load Balancer). Dynatrace detects and monitors each of these components and even resolves the processes that perform the load balancing!

This is not only useful for understanding the topology and dependencies in your environment. In an upcoming release, this monitoring functionality will enable Dynatrace to understand when availability or performance issues in your load balancer impact your environment. Stay tuned for this enhancement.

Trend charts

The new chart section is smaller. It includes trendline charts that turn red when a problem is detected. To view further details, click a trendline chart, or click the View dynamic requests button to access the new service Details page.

Service details page

The service Details page has also been vastly improved and provides much more information (see below). Each of the metric tabs now offers far greater detail.

Most significantly, we’ve increased the chart resolution across the entire product. This gives you deeper visibility into small spikes and performance variations. For each request, you can now also view the Slowest 5% (95th percentile), a Failure rate chart, an HTTP errors chart, and a CPU consumption chart. Clicking in a chart shows a vertical line, and the numbers in the tables below change accordingly; they always reflect that point in time.

Improved service-instance support

If you run multiple instances of the same service across separate hosts, you’ll see a funnel view in the Server response time chart (see below). This funnel represents the worst and best of your instances at each point in time. If you see spikes in the funnel, but not in the overall response time, only a minority of the instances (or even a single instance) has experienced a response-time spike. When this is the case, you should take a deeper look at the breakdown of the specific service instance that’s experiencing the issue.

Notice how the top chart in the example below shows a spike in the funnel at 16:10, but not in the overall median response time? When you click the chart at that position and look at the instance breakdown, you see that one of the instances is much slower than the others. Click that instance to view more detail.


Client-side response time

Interestingly, many services reveal a totally new perspective in the Server response time chart when viewed from the client side. The example below shows response time and failure rate as perceived by the calling process on the client side.

Much more…

You also have access to all the standard analysis features that Dynatrace provides, in the context of the selected timeframe and metric. Also, notice that you can view all requests that were processed during a selected timeframe in the Top web request list. There is now a separate list here to make your key requests easier to find.

The post Improved design & functionality of Service pages appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

How IT in the age of digital transformation is like the space race

Next month will mark the 47th anniversary of Apollo 13. The film that recounts the story is a favorite of mine, probably because it so masterfully captures the incredible suspense, fear, and hope felt by people everywhere – feelings I personally recall very well. I also know how an organization’s digital transformation can generate similar reactions!

Apollo 13 was likely a casualty of the space program’s incredibly aggressive schedule. NASA, with U.S. political and military leaders, was motivated by competition from the U.S.S.R. and fear of the potential consequences of not being first. Urgency often overruled caution. A good plan today is better than a perfect plan tomorrow.

“First” is a competitive imperative in business, too, creating an urgency around digital innovation. But unlike the space race, the end goal of “first” for business isn’t simply getting there, it’s the impact you create once you arrive. Strategic apps need to work correctly more than once.

Shared visibility into problems is key

Launch day produces sweaty palms in both scenarios. Stuff can – and does – go wrong. In both cases, the initial questions are the same: “What happened? What’s the status?” In the film, NASA engineers struggle to remain calm as disaster unfolds on their consoles. Each of them is monitoring a different aspect of the mission – flight operations, the Odyssey spacecraft, the astronauts themselves. Collectively, they cannot see what is going on. Sound familiar?

In this respect, the metaphor is all too valid for modern IT Operations. Often, teams have little visibility into exactly what is happening when a problem disrupts service delivery. They struggle to quickly assess the scope, parse the data from separate components, reconcile disparate conclusions, and ultimately (hopefully) determine the root of the problem.

Is failure not an option, or is it inevitable?

In the movie, when discussing options for rescuing the three astronauts, Flight Director Gene Kranz tells his team, “Failure is not an option.” The same is true for strategic digital business initiatives. But most IT professionals would admit that failure is practically an inevitability in the systems that deliver them. In fact, viewed in terms of the user’s experience, malfunctions are frequent: intermittent slow response, images that don’t load, failed connections to third parties that cause a transaction to hang – the list goes on.

So, to what extent are these “degrees of failure” viewed as impacting the success of the initiative itself? In practice, probably not at all, because in most organizations, it’s not observed. If you are not monitoring IT service delivery for interruptions and degradations – and further, the related impact on conversions, registrations, enrollments, payments, etc. – you don’t even know there is a failure, until it is catastrophic. But what you don’t know can hurt you. Houston, we have a problem…

Shared visibility, increased manageability

Embedding better visibility and manageability into their operations wasn’t feasible for NASA in the 1960s. The computer in the Apollo 13 command module had 64 Kbytes of memory and operated at 0.043 MHz; my iPhone 6 is 32,600 times more powerful than each of the IBM mainframes at the Goddard Space Flight Center. That brings to my mind another scene from the film. Flight operations realizes that they urgently need to recalculate the spacecraft’s re-entry trajectory. In unison, several guys reach behind their keyboards for their slide rules. Yes, slide rules! They did not have an interactive application that would automate the complex calculations.

By comparison, today’s IT organizations have all the computing power in the world; much can be done with modern application management technology to respond to, and even prevent, service delivery degradation and failures. The dependency of business results on IT operations variables can be observed, and even inform IT objectives and practices. It’s not rocket science. To ensure the success of strategic (and costly) digital initiatives, APM is essential.

The post How IT in the age of digital transformation is like the space race appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

DevSecOps: Slaying the Myths of Container Security

Sonatype Blog - Tue, 03/21/2017 - 11:06
Containers are clearly appealing for companies and development teams who want to deliver and iterate on their software faster and more efficiently. This is achieved through more consistent, simple and repeatable deployments, rapid rollback, and simpler ways of orchestrating and scaling distributed...

To read more, visit our blog at www.sonatype.org/nexus.
Categories: Companies

DevSecOps: Integrating Automated Security Controls

Sonatype Blog - Tue, 03/21/2017 - 11:05


To read more, visit our blog at www.sonatype.org/nexus.
Categories: Companies

DevSecOps: Embracing Automation While Letting Go of Tradition

Sonatype Blog - Tue, 03/21/2017 - 11:04
While I am all for traditions like Thanksgiving turkey and Sunday afternoon football, holding onto traditions in your professional life can be career limiting. The awesome thing about careers in technology is that you constantly have to be on your front foot.  Because when you’re not, someone,...

To read more, visit our blog at www.sonatype.org/nexus.
Categories: Companies

Pipeline Workshop & Hackergarten @ ToulouseJAM Feedback

Earlier this month, a full-day event about Jenkins Pipeline was organized in Toulouse, France with the Toulouse JAM. After a warm-up on the previous Tuesday, where Michaël Pailloncy had given a talk at the local Toulouse DevOps user group about the Jenkins Pipeline ecosystem, we were ready for more digging :-).

The agenda

We had planned the day in two parts:
  • Morning would be a more driven workshop, with slides & exercises to be completed
  • Pizzas & beverages to split the day :-)
  • Afternoon would be somewhat like an Unconference, where people basically decide by themselves what they want to work on.

We planned to have 30 attendees....
Categories: Open Source

Looking for the Right Testers Skills

Software Testing Magazine - Mon, 03/20/2017 - 18:09
Software testers’ job descriptions will often mainly emphasize the business and technical requirements: experience working in the banking sector, Agile Testing, Selenium, etc. As in...

Categories: Communities

New AI-based Software Testing Tool with a Trick: ReTest 1.0

Software Testing Magazine - Mon, 03/20/2017 - 18:01
The German start-up “ReTest” brings artificial intelligence (AI) into software testing. To this end, it promotes an innovative testing approach, which is a combination of...

Categories: Communities

Security updates for multiple Jenkins plugins

Multiple Jenkins plugins received updates today that fix several security vulnerabilities:
  • Active Directory
  • Distributed Fork
  • Email Extension (Email-ext)
  • Mailer
  • SSH Slaves

For an overview of what was fixed, see the security advisory. Additionally, we also published a security notice for the following plugin and recommend that users disable and uninstall it:
  • Pipeline: Classpath Step

This plugin is not part of the Pipeline suite of plugins, despite its name. It’s installed on just several hundred instances. Subscribe to the jenkinsci-advisories mailing list to receive important notifications related to Jenkins security....
Categories: Open Source

The Gift of Feedback (in a Booklet)

thekua.com@work - Sun, 03/19/2017 - 20:00

Receiving timely relevant feedback is an important element of how people grow. Sports coaches do not wait until the new year starts to start giving feedback to sportspeople, so why should people working in organisations wait until their annual review to receive feedback? Leaders are responsible for creating the right atmosphere for feedback, and to ensure that individuals receive useful feedback that helps them amplify their effectiveness.

I have given many talks and written a number of articles on this topic to help you.

However, today I want to share some brilliant work from some colleagues of mine, Karen Willis and Sara Michelazzo (@saramichelazzo), who have put together a printable guide to help people collect feedback and to help structure writing effective feedback for others.

Feedback Booklet

The booklet is intended to be printed in an A4 format, and I personally love the hand-drawn style. You can download the current version of the booklet here. Use this booklet to collect effective feedback more often, and share this booklet to help others benefit too.

Categories: Blogs

A Field of My Stone

Hiccupps - James Thomas - Sat, 03/18/2017 - 09:04

The Fieldstone Method is Jerry Weinberg’s way of gathering material to write about, using that material effectively, and using the time spent working the material efficiently. Although I’ve read much of Weinberg’s work, I’d never got round to Weinberg on Writing until last month, after several prompts from one of my colleagues.

In the book, Weinberg describes his process in terms of an extended analogy between writing and building dry stone walls which - to do it no justice at all - goes something like this:
  • Do not wait until you start writing to start thinking about writing.
  • Gather your stones (interesting thoughts, suggestions, stories, pictures, quotes, connections, ideas) as you come across them. 
  • Always have multiple projects on the go at once. 
  • Maintain a pile of stones (a list of your gathered ideas) that you think will suit each project.
  • As you gather a stone, drop it onto the most suitable pile.
  • Also maintain a pile for stones you find attractive but have no project for at the moment.
  • When you come to write on a project, cast your eyes over the stones you have selected for it.
  • Be inspired by the stones, by their variety and their similarities.
  • Handle the stones, play with them, organise them, reorganise them.
  • Really feel the stones.
  • Use stones (and in a second metaphor they are also periods of time) opportunistically.
  • When you get stuck on one part of a project move to another part.
  • When you get stuck on one project move to another project.

The approach felt extremely familiar to me. Here's the start of an email I sent just over a year ago, spawned out of a Twitter conversation about organising work:
I like to have text files around [for each topic] so that as soon as I have a thought I can drop it into the file and get it out of my head. When I have time to work on whatever the thing is, I have the collected material in one place. Often I find that getting material together is a hard part of writing, so having a bunch of stuff that I can play with, re-order etc helps to spur the writing process.

For my blogging I have a ton of open text files:


You can see this one, Fieldstoning_notes.txt and, to the right of it, another called notes.txt which is collected thoughts about how I take notes (duh!) that came out of a recent workshop on note-taking (DUH!) at our local meetup.

I've got enough in that file now to write about it next, but first here's a few of the stones I took from Weinberg on Writing itself:

Never attempt to write what you don’t care about.

Real professional writers seldom write one thing at a time.

The broader the audience, the more difficult the writer’s job.

Most often [people] stop writing because they do not understand the essential randomness involved in the creative process.

... it’s not the number of ideas that blocks you, it’s your reaction to the number of ideas.

Fieldstoning is about always doing something that’s advancing your writing projects.

The key to effective writing is the human emotional response to the stone.

If I’ve been looking for snug fits while gathering, I have much less mortaring to do when I’m finishing.

Don’t get it right; get it written.

"Sloppy work" is not the opposite of "perfection." Sloppy work is the opposite of the best you can do at the time.
Categories: Blogs

Pairing For Learning – Across the Team

Agile Testing with Lisa Crispin - Fri, 03/17/2017 - 02:32

I’ve written a lot about pairing over the years, most recently about strong-style pairing with others on my team. Pairing is an excellent way to transfer skills, it offers a lot of advantages for overcoming cognitive biases when testing, and it’s just plain fun.

For most of the 4.5 years I’ve been on my current team, I haven’t been able to pair with developers as much as I would like. For one thing, the developers pair with each other 100% of the time. Also, I suspect that the dev managers worried that testers would slow their developers down, even when a developer is soloing because the pod has an odd number of people that day.

On the pairing journey

Fortunately, the development managers also understand the value of exploratory testing, and want developers to improve their ET skills. I’ve written about ways we have helped non-testing team members learn exploratory testing skills. The workshops and other efforts helped, but developers felt they needed to learn more. Our team is moving towards continuous delivery, and the managers feel that developers need to step up their exploratory testing at the story level to mitigate the risk of bad issues getting out in production. Our team embarked on an experiment: each tester should pair with a developer at least one day a week.

Experiment underway

Squeal! I get to pair not only with other testers, product managers and designers, but also with developers. Our team is divided into several vertical “pods”, and as we are so few testers, each of us has to help on two or more pods. I was pleasantly surprised that “my” pods embraced this experiment from the start. It also happened that, for various reasons, my pods were “odd”: there was one developer each day who would have to solo. Instead of solo-ing, they paired with me! Not only were they OK with this, they were actually eager to do it. One day recently, two pods were vying to have me pair with a dev!

The main intent of the experiment was to help devs learn how to write exploratory testing charters and execute exploratory testing. In practice, this has meant everything from writing charters and executing ET charters to simply working on stories.

Doing exploratory testing activities together has the expected benefits. The devs learn good techniques for writing charters (we use Elisabeth Hendrickson’s template from her book Explore It!), useful exploring resources such as personas and heuristics, and the importance of reporting what testing they did and what they learned. I think that we testers help the devs think beyond the happy path.

Pairing on “production code” too!

Do uncomfortable things in pairs – great advice from Mike Sutton. See https://www.slideshare.net/mike.sutton/the-power-of-communities-of-practice-in-testing for more!

I’ve found pairing on story work surprisingly valuable. For one thing, I have new insight into what the developers’ job is like – it’s not easy! I get to watch them test-drive their code (they do offer to let me drive, but they are so freaking fast in their IDEs, I don’t know all those shortcuts! But I will work up the nerve eventually!) and they explain their thought process as they go. I’m learning a lot about our app’s architecture and reasons behind behavior I observe in testing, such as performance issues. As they write unit tests, I might suggest another test case, and hear “Oh, good, I didn’t think of that!” Or I might ask why a particular test is using double negatives (one of my pet peeves), and that turns out to be a helpful suggestion.

My patient teammates transfer lots of their skills to me. I’ve learned some new git parameters, I’ve learned a lot about using browser dev tools to debug CSS and other valuable activities, I’ve learned a little about BEM. I’m being exposed to lots of new things that help me understand our coding standards and process better, which I think will help me do a better job of testing.

Our app is mostly Rails and JS, but the team is also starting to code some pages in Elm. Pairing with a dev writing Elm code was rather mind-bending. Elm prevents runtime exceptions by detecting issues during compilation and giving friendly hints on how to correct them.

We are fortunate that our developers pair at least 7 hours a day. We have pair workstations, each fitted out with an iMac, a 27″ Thunderbolt monitor, with mirrored displays, two keyboards and two mice. People move around every day, so if they have their own favorite keyboard and/or mouse they just carry it with them. This makes pairing comfortable – no craning your neck to see what the other person is doing, no weird personal space issues. If your work area doesn’t have comfortable pair workstations, see if you can set up at least one pairing area that pairs can use.

Good for what ails you

I’ve always been a fan of pairing, though I also am subject to the same impediments I hear other people cite. “We have too much work to do, we should divide and conquer”. “I will slow that person down too much because she’ll have to explain everything to me.” “My true nature of being an imposter will be exposed.” When I can overcome those excuses, I find pairing powerful for so many purposes.

Pairing is caring

Are you a tester on a team where the programmers are using poor coding practices and throwing code over the wall, expecting you to find all the bugs? Find the friendliest programmer, work up your courage, and go ask her if she will pair with you for an hour to test a feature, or write automated tests. Whatever success that brings, keep building on it. You will start building relationships that will let you get your whole delivery team engaged in solving testing problems.

If, like me, you find it hard to stop the merry-go-round of daily routine and make time to pair, put it on the calendar. Pick one day a week to pair with a tester, coder, designer, PO, BA, whomever. Once when another tester and I were finding it hard to make time for pairing, we added a daily one-hour meeting to our calendars and stuck to it as much as possible (I blogged about that experience too). I’ve also paired with total strangers who volunteered to pair with me via Twitter, with terrific results!

Take a baby step: pair with someone for an hour today. At the end of your hour, do a mini-retrospective together to discuss the benefits and disadvantages you experienced. Keep iterating and see if the benefits outweigh the downsides. (When I really do pair, I find no downsides.) It’s a great way to learn!

The post Pairing For Learning – Across the Team appeared first on Agile Testing with Lisa Crispin.

Categories: Blogs
