
Feed aggregator

Tools: Take Your Pick Part 3

Hiccupps - James Thomas - Tue, 08/30/2016 - 05:47


In Part 1 of this series I observed my behaviour in identifying problems, choosing tools, and finding profitable ways to use them when cleaning my bathroom at home. The introspection broadened out in Part 2 to consider tool selection more generally. I speculated that, although we may see someone apparently thoughtlessly viewing every problem as a nail and hence treatable with the same hammer, that simple action can hide deeper conscious and unconscious thought processes. In Part 3 I find myself with these things in mind, reflecting on the tools I use in my day-to-day work.

One class of problems that I apply tools to is where the route to the solution is understood and I want to get there quickly. I think of these as essentially productivity or efficiency problems, and one of the tools I deploy to resolve them is a programming or scripting language.

Programming languages are tools, for sure, but they are also tool factories. When I have some kind of task which is repetitive or tiresome, or which is substantially the same in a bunch of different cases, I'll look for an opportunity to write a script - or fabricate a tool - which does those things for me. For instance, I frequently clone repositories from different branches of our source code using Mercurial. I could type this every time:

$ hg clone -r branch_that_I_want https://our.localrepo.com/repo_that_I_want

... and swear a lot when I forget that this is secure HTTP or mistype localrepo again. Or I could write a simple bash script, like this one, and call it hgclone:

#!/bin/bash

# Clone the requested branch ($1) of the requested repository ($2).
# Quoting the arguments avoids surprises if they ever contain unexpected characters.
hg clone -r "$1" "https://our.localrepo.com/$2"

and then call it like this whenever I need to clone:

$ hgclone branch_that_I_want repo_that_I_want

Now I'm left dealing with the logic of my need but not the implementation details. This keeps me in flow (if you're a believer in that kind of thing) or just makes me less likely to make a mistake (you're certainly a believer in mistakes, right?) and, in the aggregate, saves me significant time, effort and pain.

Your infrastructure will often provide hooks for what I sometimes think of as micro tools too. An example of this might be aliases and environment variables. In Linux, because that's what I use most often, I have set things up (sketched after this list) so that:
  • commands I like to run a particular way are aliased to always run that way.
  • some commands I run a lot are aliased to single characters.
  • some directory paths that I need to use frequently are stored as environment variables.
  • I can search forwards and backwards in my bash history to reuse commands easily.
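
For illustration, the relevant chunk of my ~/.bashrc looks something like this (the specific aliases, paths and key bindings are invented examples rather than my actual setup):

# Always run a command a particular way by aliasing it with the flags I want.
alias ls='ls -lh --color=auto'

# Single-character alias for a command I run a lot.
alias h='hg status'

# A directory path I need frequently, stored in an environment variable...
export BUILDS=/data/builds
# ... so that "cd $BUILDS" works from anywhere.

# Make the arrow keys search backwards and forwards through bash history
# for lines starting with whatever has already been typed.
bind '"\e[A": history-search-backward'
bind '"\e[B": history-search-forward'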

One of the reasons that I find writing (and blogging, although I don't blog anything like as much as I write) such a productive activity is that the act of doing it - for me - provokes further thoughts and connections and questions. In this case, writing about micro tools I realise that I have another kind of helper, one that I could call a skeleton tool.

Those scripts that you return to again and again as starting points for some other piece of work are probably useful because of some specific piece of functionality within them. You hack out the rest and replace it in each new usage, but keep that generally useful bit. That bit is the skeleton. I have one in particular that is so useful I've made a stripped-down copy of it, containing only the bits I was reusing, to make it easier to hack on.

Another class of problem I bump into is more open-ended. Often I'll have some idea of the kind of thing I'd like to be able to do because I'm chasing an issue. I may already have a tool but its shortcomings, or my shortcomings as a user, are getting in the way. I proceed here in a variety of ways, including:
  • analogy: sometimes I can think of a domain where I know of an answer, as I did with folders in Thunderbird.
  • background knowledge: I keep myself open for tool ideas even when I don't need tools for a task. 
  • asking colleagues: because often someone has been there before me.
  • research: that frustrated lament "if only I could ..." is a great starting point for a search. Choosing sufficient context to make the search useful is a skill. 
  • reading the manual: I know, old-fashioned, but still sometimes pays off.

On one project, getting the data I needed was possible but frustratingly tiresome. I had tried to research solutions myself, had failed to get anything I was happy with, and so asked for help:
#Testers: what tools for monitoring raw HTTP? I'm using tcpdump/Wireshark and Fiddler. I got networks of servers, including proxies #testing— James Thomas (@qahiccupps) March 26, 2016

This led to a couple of useful, practical findings: that Fiddler will read pcap files, and that chaosreader can provide raw HTTP in a form that can be grepped. I logged these findings in another tool - our company wiki - categorised so that others stand a chance of finding them later.
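
As a rough sketch of the chaosreader route (the capture file name is invented, and chaosreader's exact flags and output file names vary between versions):

$ tcpdump -i eth0 -w capture.pcap port 80   # capture the raw traffic to a pcap file
$ chaosreader capture.pcap                  # split the capture into per-session files, including raw HTTP
$ grep -li "user-agent" session_*           # grep the extracted sessions for whatever you're chasing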

Re-reading this now, I notice that in that Twitter thread I am casting the problem in terms of the solution that I am pursuing:
I would like a way to dump all HTTP out of .pcap. Wireshark cuts it up into TCP streams.

Later, I recast the problem (for myself) in a different way:

I would like something like tcpdump for HTTP.

The former presupposes that I have used tcpdump to capture raw comms and now want to inspect the HTTP contained within it, because that was the kind of solution I was already using. The latter is agnostic about the method, but uses analogy to describe the shape of the solution I'm looking for. More recently still, I have refined this further:

I would like to be able to inspect raw HTTP in real time, and simultaneously dump it to a file, and possibly modify it on the fly, and not have to configure my application to use an external proxy (because that can change its behaviour).

Having this need in mind means that when I happen across a tool like mitmproxy (as I did recently) I can associate it with the background problem I have. Looking into mitmproxy, I bumped into HTTPolice, which can be deployed alongside it and used to lint my product's HTTP. Without the background thinking I might not have picked up on mitmproxy when it floated past me; without picking up on mitmproxy I would not have found HTTPolice or, at least, not found it so interesting at that time.
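
For a flavour of what that looks like, here is a sketch from memory rather than a recipe - check the current mitmproxy and HTTPolice documentation for the real integration, and note that the addon script name below is hypothetical:

$ mitmdump -w flows.out                        # watch the HTTP pass through the proxy and dump the flows to a file
$ mitmdump -s httpolice_addon.py -w flows.out  # hypothetical: also load an HTTPolice addon to lint the HTTP as it passes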

Changing to a new tool can give you possibilities that you didn't know were there before. Or expose a part of the space of possible solutions that you hadn't considered, or change your perspective so that you see the problem differently and a different class of solutions becomes available.

Sometimes the problem is that you know of multiple tools that you could start a task in, but you're unsure of the extent of the task, the time you'll need to spend on it, whether you'll need to work and rework or whether this is a one-shot effort, and other meta-problems of the problem itself. I wondered about this a while ago on Twitter:
With experience I become more interested in - where other constraints permit - setting up tooling to facilitate work before starting work.— James Thomas (@qahiccupps) December 5, 2015
And where that's not possible (e.g. JFDI) doing in a way that I hope will be conducive to later retrospective tooling.— James Thomas (@qahiccupps) December 5, 2015
And I mean "tooling" in a very generic sense. Not just programming.— James Thomas (@qahiccupps) December 5, 2015
And when I say "where other constraints permit" I include contextual factors, project expectations, mission, length etc not just budget— James Thomas (@qahiccupps) December 5, 2015
Gah. I should've started this at https://t.co/DWcsnKiSfS. Perhaps tomorrow.— James Thomas (@qahiccupps) December 5, 2015
I wonder if this is irony.— James Thomas (@qahiccupps) December 5, 2015

A common scenario for me at a small scale is, when gathering data, whether I should start in a text file, Excel, or an Excel table. Within Excel, these days, I usually expect to switch to tables as soon as it becomes apparent I'm doing something more than inspecting data.

Most of my writing starts as plain text. Blog posts usually start in Notepad++ because I like the ease of editing in a real editor, because I save drafts to disk, because I work offline. (I'm writing this in Notepad++ now, offline because the internet connection where I am is flaky.) Evil Tester wrote about his workflow for blogging and his reasons for using offline editors too.

When writing in text files I also have heuristics about switching to a richer format. For instance, if I find that I'm using a set of multiply-indented bullets that are essentially representing two-dimensional data it's a sign that the data I am describing is richer than the format I'm using. I might switch to tabulated formatting in the document (if the data is small and likely to remain that way), I might switch to wiki table markup (if the document is destined for the wiki), or I might switch to a different tool altogether (either just for the data or for everything.)

At the command line I'll often start in the shell, then move to a bash script, then move to a more sophisticated scripting language. If I think I might later add what I'm writing to a test suite I might make a different set of decisions than when writing a one-off script. If I know I'm searching for repro steps I'll generally work in a shell script, recording various attempts as I go and commenting them out each time so that I can easily see what I did that led to what. But if I think I'm going to be doing a lot of exploration in an area I have little idea about I might work more interactively, using the script command to log my attempts.
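
By the time I'm done, one of those repro scripts tends to look something like this (the commands and revision number are invented for illustration):

#!/bin/bash

# Attempt 1: plain clone - didn't reproduce the problem.
# hgclone branch_that_I_want repo_that_I_want

# Attempt 2: clone and update to the suspect revision - reproduced intermittently.
# hgclone branch_that_I_want repo_that_I_want && cd repo_that_I_want && hg update -r 1234

# Attempt 3: as attempt 2, but from a clean directory - reproduces every time.
rm -rf repo_that_I_want
hgclone branch_that_I_want repo_that_I_want && cd repo_that_I_want && hg update -C -r 1234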

At a larger scale, I will try to think through workflows for data in the project: what will we collect, how will we want to analyse it, who will want to receive it, how will they want to use it? Data includes reports: who are we reporting to, how would they like to receive reports, who else might be interested? I have a set of defaults here: use existing tooling, use existing conventions, be open about everything.

Migration between tools is also interesting to me, not least because it's not always a conscious decision. I find I've begun to use Notepad++ more on Windows whereas for years I was an Emacs user on that platform. In part this is because my colleagues began to move that way and I wanted to be conversant in the same kinds of tools as them in order to share knowledge and experience. On the Linux command line I'll still use Emacs as my starting point, although I've begun to teach myself vi over the last two or three years. I don't want to become dependent on a tool to the point where I can't work in common, if spartan, environments. Using different tools for the same task has the added benefit of opening my mind to different possibilities: seeing which concepts repeat across tools, which don't, and where they differ.

But some migrations take much longer, or never complete at all: I used to use find and grep together to identify files with certain characteristics and search them. Now I often use ack. But I'll continue to use find when I want to run a command on the results of the search, because I find its -exec option a more convenient tool than the standalone xargs.
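
For example (the search pattern and file names are made up):

$ ack 'TimeoutError' logs/                                    # quick recursive search
$ find logs/ -name '*.log' -exec grep -l 'TimeoutError' {} +  # the same search via find ...
$ find logs/ -name '*.log' -mtime -1 -exec gzip {} +          # ... which makes it easy to run some other command on the matches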

Similarly I used to use grep and sed to search and filter JSON files. Now I often use jq when I need to filter cleanly, but I'll continue with grep as a kind of gross "landscaping" tool, because I find that the syntax is easier to remember even if the output is frequently dirtier.
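
Something like this, say (the field names are invented):

$ jq '.results[] | select(.status != "ok") | .id' response.json  # clean, structure-aware filtering
$ grep -c '"status": "error"' response.json                      # gross landscaping: roughly how many errors are in there?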

On the other hand, there are sometimes tools that change the game instantly. In the past I used Emacs as a way to provide multiple command lines inside a single connection to a Linux server. (Aside: PuTTY is the tool I use to connect to Linux servers from Windows.) When I discovered screen I immediately ditched the Emacs approach. Screen gives me something that Emacs could not: persistence across sessions. That single attribute is enough for me to swap tools. I didn't even know that kind of persistence was possible until I happened to be moaning about it to one of our Ops team.
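
For anyone who hasn't met screen, the workflow that won me over goes roughly like this:

$ screen -S debugging   # on the server: start a named session
# ... work away; detach with Ctrl-a d, or simply lose the connection ...
$ screen -ls            # later, from a fresh PuTTY login: list the sessions still running
$ screen -r debugging   # reattach, with everything exactly as I left it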

Why didn't I look for a solution to a problem that was causing me pain? I don't know the answer to that.

I do know about Remote Desktop so I could have made an analogy and begun to look for the possibility of command line session persistence. I suspect that I just never considered it to be a possibility. I should know better. I am not omniscient. (No, really.) I don't have to imagine a solution in order to find one. I just have to know that I perceive a problem.

That's a lesson that, even now, I learn over and over. And here's another: even if there's not a full solution to my problem there may be partial solutions that are improvements on the situation I have.

In Part 4 I'll try to tie together the themes from this and the preceding two posts.
Image: https://flic.kr/p/5mPY4G
Syntax highlighting: http://markup.su/highlighter
Categories: Blogs

Demos at Jenkins World 2016

At this year’s Jenkins World, our events officer Alyssa has been working to organize various activities in the "Open Source Hub" on the expo floor. Both days of the conference (Sept. 14th and 15th), during the break for lunch, there will be 15 minute demos by many of the experts helping to staff the Open Source Hub.

Demo Schedule - Wednesday, September 14th
  • 12:15 - 12:30, Blue Ocean in Action (Keith Zantow): Showcase of Blue Ocean and how it will make Jenkins a pleasure to use.
  • 12:30 - 12:45, Notifications with Jenkins Pipeline (Liam Newman): Sending information to Slack, HipChat, email and more from your Pipeline.
  • 12:45 - 13:00, Docker and Pipeline: Learn how to use Docker inside of...
Categories: Open Source

Tools: Take Your Pick Part 2

Hiccupps - James Thomas - Mon, 08/29/2016 - 21:45

In Part 1, I described my Sunday morning Cleaning the Bathroom problem and how I think about the tools I'm using, the way I use them, and why.  In particular I talked about using a credit card as a scraper for the grotty build up around the sides of the bath and sink. On the particular Sunday that kicked this chain of thoughts off, I noticed myself picking the card up and using a corner of it vertically, rather than its edge horizontally, to remove some guff that was collecting around the base of a tap.

This is something I've been doing regularly for a while now but, before I got the scraper, it was a job I used an old toothbrush for. In Part 1 I recounted a number of conscious decisions around the way I keep the littlest room spic and span, but switching to use the card on the tap wasn't one I could recall making.

Observing myself taking a tool I'd specifically obtained for one purpose and using it for another put me in mind of this old saw:
When all you have is a hammer, everything looks like a nail.

Until I looked it up [1] just now I hadn't heard this saying called The Law of the Instrument, nor come across the slightly subtler formulation that Wikipedia attributes to Maslow:

I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.

Given the first of those two variants, it's easy to imagine that the universal application of the hammer is a mindless option - and we've all probably seen instances of that - but I think that, generally, tools are used in places where they are inappropriate or sub-optimal for a variety of reasons, and temptations, as Maslow would have it.

There are three explicit variables in play in this space: the problem, the tool, and the person using the tool. Here's one way I explored it, by considering the possible scenarios involving the tool and choice of tool, and trying to understand how the person might have got there:

I recognise the shape of the problem, and I have a tool that I know fits it
  •  ... but I use my favourite tool instead because I'm more familiar with it.
  •  ... but I use something else because of politics in the office, my boss, my colleagues, ...
  •  ... but I use something novel because I want to own this problem and be the expert in it.
  •  ... but I use something else to prevent an increase in our already large tool set.
  •  ... but I won't use it because of ethical or moral issues I have with the tool vendor.
  •  ... but I won't use it because of previous bad experiences with the tool, or others similar to it in some way.
  •  ... but the context changed since I last looked, and I didn't notice, so I'll continue to use the existing tool.
  •  ...
I recognise the shape of the problem, but I don't have a tool that I know fits it
  • ... so I'll use the tool that I have invested heavily in anyway because sunk cost fallacy
  • ... so I'll use the tool I do have that looks closest.
  • ... so I'll use the tool that I have in my hand right now because there's no context-switching cost.
  • ... so I'll continue to use the tool I am using now, despite its flaws, because I believe there is some benefit.
  • ... so I'll use a tool I do have because there's no time/budget/desire to look for others.
  • ... so I'll use something I do have because I'm uncertain of my ability to choose a new tool well.
  • ... so I'll continue to use a tool I have because I'm worried about the cost of getting a new tool wrong.
  • ... so I'll use whatever is to hand because I don't really care about doing a good job.
  • ...
I don't recognise the shape of the problem
  • ... so I'll try the tools I've got and see where they get me,
  • ... or make a tool,
  • ... or use no tool,
  • ... or try to break the problem down into pieces that can be attacked with tools I do know.
  • ...

The latter class can be particularly galling because it contains two sub-cases:
  •  I don't recognise the shape of the problem, and - even if I did - I don't have a tool that fits it.
  •  I don't recognise the shape of the problem, but - if I did - I would find that I have a tool that fits it.

And much wailing and gnashing of teeth have been caused by the hindsight searchlight focusing on the second of those. Your wailing and gnashing of teeth, right? And the same is true of these scenarios:

I don't, or am not prepared to, recognise the existence of a problem
  • ... so I make no decisions about tool use at all
  • ... which means that I might stay as I am or unconsciously drift into some other behaviour.
  • ...
I recognise that there is no problem
  • ... but I have an agenda that I am pushing and so force tool use anyway.
  • ... but I just love to try new things so I'll go ahead and use a tool for its own sake.
  • ... but I'm putting off some other work so I'll do needless work here.
  • ... but I haven't got enough to do so I'll try this tool out to look busy.
  • ...

As I enumerate these cases, I begin to think that they apply not just to the person with just the hammer, but to all of us, every time we do or do not use any tool for any task.

In using any tool at all you are making a decision - implicitly or explicitly. When you enter three commands into the shell to get something to run, you are either accepting that you will not use a script to do it for you - and so avoid the typos, being in the wrong directory, and so on - or you are missing out on the knowledge that a script could help you, perhaps because you don't care to avoid the time spent on typing and typos.

In choosing to use the same tool that you always use for editing a file you are missing out on the chance to learn that there is something better out there. But also avoiding paying the costs of learning that new thing. Do you do that consciously?

I started trying to map these kinds of observations back onto my own exploration of ways in which I could satisfy my bathroom mission. As I did this, I came to realise that I have mostly cast the problem recognition and tool choice as something that is done from a position of knowledge of the problem. But my own experience shows me that it's less clear-cut than that.

In this area, I love Weinberg's definition of a problem:
A problem is a difference between things as desired and things as perceived.

I love it not least because it reminds me that there are multiple levers that can be pulled when confronted with a problem. If I am struggling with the shape of the problem I can change it, change my view of it, change my desires about what it should be. Opening out this way in turn reminds me that exploration of the space is an incredibly useful way to begin to understand which of those levers I can and/or should be pulling: perhaps if I can remove the things that look like nails, I can even put down my hammer.
 
Sometimes I find that I can learn more about the shape of the problem by using the tools I have and discovering their weaknesses. Sometimes I can only imagine a possible solution by attempting to resolve the problem the wrong way. If I do that knowingly, deliberately, then I'm here:

I recognise the shape of the problem, but I don't have a tool that I know fits it ... so I will explore the problem space with the tools I have, deliberately experimenting and hoping to learn more about the tools, the space, the problem, myself.

And I might find that I'm then saying "aha, if only I had something which could ..." or "oh, so perhaps I don't really need ..."

But this means deliberately deciding to spend some of whatever budget is available for problem solving on the meta task of understanding the problem. Stated as baldly as this it seems obvious that someone with a problem might consider that, doesn't it? But how many times have you seen something else happen?

How many times have you seen only a tiny fraction of the capacity of some tool being exploited? For anything more complicated than a hammer, it's easy not to know that there are capabilities not being used. The Law of the Instrument can be applied within tools too. If I don't know that Word can do mail merge, I might find myself buying a new tool to do it, for example.

On the other hand, creative reuse can be a good way to get additional value from an existing tool. A hammer can be used for things other than hitting a nail - as a door stop, as a lever, to encourage some seized machinery to become separated, as a counterbalance, as a pendulum weight, as a goal post, and might be a sufficiently good substitute for the "proper" tool in any given situation, at any given time. And, in fact, imagining ways to reuse a tool can itself be a useful way to get creative juices flowing.

But contexts change - the problem might alter, views of it might alter, the available tools might alter. Being open to reconsidering decisions is important in getting a good outcome. Doing nothing, reconsidering nothing - that pretty much guarantees at best standing still or perhaps applying a solution to a problem that no longer exists or applying the wrong solution to the problem that does.

Tool use is inherent in software development and the kinds of choices I've described above are being made all the time for all sorts of reasons, including those that I've given. It was interesting to me, as I enumerated these thoughts, that although in my bathroom cleaning example I have no reason to be anything other than on the level - there are no bathroom politics in our house and the stakes are not high in any dimension - and despite doing my best to be as clear to myself about what I'm thinking and trying at any given time as I can, I still proceeded to make choices unconsciously, to not take account of useful evidence, and to continue with one line of exploration past the point at which its utility was exhausted.

In Part 3 I'll try and recast these thoughts in terms of some practical examples from the world of work rather than bathroom cleaning.
Image: https://flic.kr/p/eFAoHQ

Footnote: [1] Given where I come from, and its traditional rivalry with Birmingham, I'm amused that the hammer that's applied to every problem is sometimes called a Birmingham Screwdriver.
Categories: Blogs

Tools: Take Your Pick Part 1

Hiccupps - James Thomas - Mon, 08/29/2016 - 21:26

It's early on a Sunday morning and I'm thinking about tools. Yes, that's right: Sunday morning; early; tools.

Sunday morning is bathroom cleaning morning for me [1] and, alongside all the scrubbing, squirting, and sluicing I spend time evaluating the way I clean against the results I achieve. My goal is to do a good job (in terms of the cleanliness and appearance of cleanliness of the bathroom) balanced against expense, time and effort.

I've got a set of tools to help with this task. Some are traditional and obvious, including sponge, J-cloth, glass cleaner, bathroom cleaner, toilet brush, Marigolds, ... Some are reasonably well known but not mainstream, including citric acid crystals, squeegee, old toothbrush, ... and some are more niche, including a credit card, flannel, and car windscreen coating, ...

By tool I mean something as basic as this, from Oxford Dictionaries:
A thing used to help perform a job

And I'm being generous in my interpretation of job too - pretty much anything that it is desired to accomplish is a job for the purposes of this discussion.

Harry Collins, in The Shape of Actions, distinguishes tools from proxies and novelties based on the extent to which they extend our capacity to do the job or can stand in for us in doing the job itself. I find this stimulating but I'm not going to be concerned with it here. (If you're interested, Michael Corum makes an admirable attempt to summarise what is a challenging book in this blog post. My thoughts on it are less comprehensive: here and here.)

I don't think there's any action in my cleaning the bathroom that doesn't employ some kind of tool by the definition I've chosen, so any consideration of how well I'm achieving my goal will implicitly include some element of how well the tool, and my use of the tool, is contributing to it. Which often means that I'm wondering whether I have the right set of tools and am getting the best out of them. Which is where I was on Sunday morning; cleaning the shower screen.

A few years ago, when we had a shower curtain, I didn't need to clean it every week but could wait until the soap and limescale grot had built up and then put it through the washing machine. Although this clearly meant that there was known uncleanliness most weeks it was a decent enough compromise, especially since we bought shower curtains with frosted patterns on them which were less likely to be obviously dirty. (The shower curtain is a tool.)

When we replaced the curtain with a folding glass screen, I simply transferred the shower curtain cleaning cycle to it. And that didn't work well. I found that I was spending a long time every few weeks trying to get stacks of accumulated crud off the thing. Which was very hard work. And the screen, with its clear glass, didn't hide any aspect of the problem either. In fact, it appeared to be more susceptible to splashes leaving a mark than the curtain, and it looked horrible most of the time.

So I experimented with the tools I was using. I explored different cleaning products - amongst them vinegar, newspaper, lemon juice, various glass sprays, and citric acid - and a variety of cloth and sponge applicators. I tried varying my technique, I tried varying the frequency - I now clean it every week - and I introduced the squeegee to remove some of the dirtiness after every shower.

This made an improvement in terms of result - the shower screen looked great on Sundays and decent for most of the week - but I was still putting more effort than I'd really like into maintaining the screen's appearance. And so I started trying to reframe the problem. Could we stop caring about cleanliness? If we could, the problem would simply go away! Yeah! But that suggestion wasn't well received by the bathroom project stakeholders.

So, could we stop the screen getting so dirty in the first place? I considered some options: a water filter, no screen (perhaps back to a curtain), a special screen (I didn't know what was possible), special soap (again, I didn't know what was possible), changing our behaviour (say, baths only), stopping the clag sticking to the screen, ...

The last of these seemed interesting because I could think of a way in which this was similar to a problem that I already understood. I have sometimes used an additive in our car's screenwash that makes water less likely to stick around on the windscreen, and wondered whether I could use it on the bathroom screen.

The cost of researching it - not least because I imagined I'd need to spend time working out what materials my shower was made of and checking for compatibility with any chemicals and so on - and the possible difficulty of application and the likely cost and the fear of getting it wrong and ruining the screen all conspired to put me off looking into it very eagerly. But the lack of desired returns from my other strategies meant that eventually I came back to it.

Making a cup of tea at work shortly afterwards, I was bellyaching to a colleague about the problem, and my idea of putting some additive into my cleaning water. And I got a lucky break! It turned out he had recently bought some stuff for coating his car windscreen which was also suitable for showers, had applied it to both, and was very happy with it.

Accepting his recommendation cut down my potential research effort, making the task more tractable in the short term. So I bought a bottle of the same product he'd used, read the instructions, watched the online videos about how to apply it, checked that I could unapply it, cleaned the screen very thoroughly, and finally applied it (a not insignificant investment in time). And I have been really pleased with the results. Ten or so weeks later, I'm still cleaning once a week but with the squeegee and the coating it's a much lighter clean and the screen looks good for the whole seven days.

Here's another example: initially I used washing up sponges for cleaning the bathroom but I found that they tended to leave tiny grains of their abrasive layer behind, which I then had to clean in some other way than with the sponge itself. So I started using an old washing up sponge (when the one from the kitchen needed replacing I would promote it to bathroom sponge) but that didn't have the scouring power I wanted. So then I wondered whether there were specialist bathroom-cleaning sponges - I know, so naive - and I now use one of them. But then I noticed that there was some accumulated soap stuff that the sponge struggled to remove, stuff that was hard to see against the white enamel of the bath until it had built up into a relatively thick layer.

I found that I could scratch this residue away with my nail and so I could generate a set of properties I might look for in a tool to do the same job better: flexible, strong, thin, easy to handle in confined spaces, non-damaging to enamel.

When confronted by a tool-selection problem with constraints, and without a solution, I find that going and poking around in my toolbox or my shed can be really helpful. I might not be able to imagine what kind of tool fits all of the requirements, but I can probably imagine which of those requirements might be satisfied by the tools, or potential tools that I can see in front of me.

I maintain several boxes of bits and pieces in the shed which are invaluable in fixing stuff and also for daughters' robot building projects:


Rummaging around in one of them, I came across an old credit card, which works perfectly as a scraper. As with the car windscreen coating, it turned out that others have been here before me but that doesn't negate the utility of the tool in my context, even if it does remind me that there were probably faster ways to this solution.

So, shiny bathrooms are great, and the journey to the symbiotic relationship of a suitable tool used in a suitable way to solve the right problem need not be linear or simple. But what here is relevant to work, and testing? Part 2 will begin to think about that.
Image: https://flic.kr/p/au9PCG

Footnote: [1] I do the washing up every day too! That's right, I'm a well-adjusted modern man, as well as handsome, young and with a full head of my own hair!
Categories: Blogs

DevOps Benefits for Software Testing

Software Testing Magazine - Mon, 08/29/2016 - 15:53
One of the main trends in software development is to deliver software more quickly. DevOps, continuous delivery and continuous integration are some of the approaches that have been promoted recently to achieve this goal. In their article “DevOps Advantages for Testing”, Gene Gotimer and Thomas Stiehm discuss the advantages that these approaches could provide to software testing and software quality. The article starts with a presentation of the different aspects of DevOps, continuous delivery and continuous integration. It then discusses the topics of code coverage, mutation testing and static analysis, explaining how they can be integrated in a continuous integration process. The article then presents the concept of a delivery pipeline, defined as the process of taking a code change from a developer and getting it delivered to the customer or deployed into production. For each topic discussed, the article proposes some open source tools that can be used for the activity.

The end of the article discusses how these concepts are applied in a case study. The conclusion of Gene Gotimer and Thomas Stiehm is that “The journey towards a continuous delivery practice relies heavily on quality tests to show if the software is (or is not) a viable candidate for production. But along with the increased reliance on testing, there are many opportunities for performing additional tests and additional types of tests to help build confidence in the software. By taking advantage of the automated tests and automated deployments, the quality of the software can be evaluated and verified more [...]
Categories: Communities

Testing Talk Interview Series – QualiTest

PractiTest - Mon, 08/29/2016 - 12:33

QualiTest designs and delivers contextualized software testing solutions that leverage deep industry-specific understanding with technology-specific competencies and unique testing-focused assets. QualiTest delivers results by combining customer-centric business models, critical thinking and the ability to gain a profound comprehension of customers’ goals and challenges.

1. Tell us a little bit about yourself. Try naming two interesting things not everyone knows about you.

My name is Yaron Kottler and I am the Executive Director of the US and UK regions at QualiTest.

I’ve been working at QualiTest for over fifteen years now. I started back in 2001 in Israel as a junior test engineer, and worked on various projects and in various industries. In 2004 QualiTest gave me the opportunity to start a new Business Unit, and in 2005 I started QualiTest’s first operations outside of Israel in Istanbul, Turkey.

Then, in 2006 my boss gave me the opportunity to relocate to the United States, to establish QualiTest here. I was very honored and happy to accept his offer.

It was even nicer that I was told to open QualiTest in Connecticut because, even without knowing it, the management team was asking me to move next to my parents-in-law, who live here.

And so, in 2006 we relocated to the US and acquired a small local company. Since then we’ve been growing constantly and now we have offices in Dallas, TX; the Bronx, NY; and two locations in California, San Francisco and San Diego. We serve customers all over the US and in various industries.

A couple of interesting things about me?

My first email address at QualiTest was djkottler@hotmail.com because previously I was a DJ – seriously!

The other one is going to be a little boring. My first area of expertise within the testing world was load and performance testing, with tools such as LoadRunner, WebLoad and others.

Load and performance testing was my niche for a few years.

2. Can you tell us a little about your services? How are you different and what makes you great?

One of the things that makes QualiTest a unique place to work with is that we are a “pure play” testing company, fully focused on the field.

The second thing is that we have very deep industry expertise across multiple areas. We have deep knowledge in devices, healthcare, pharmaceutical and biotech, telecom and a few other verticals.

If you are a career tester with a specific industry expertise or a subject matter expert looking to make a career change QualiTest may be one of the best opportunities for you.

One more thing is that about 40% of our resources for the US market work at QualiTest’s Onshore Test Center. We have a strong operation offshore as well, and it is getting stronger, but still the vast majority of the resources in the US work through test centers.

3. If I am not using your solution today, what are the top three things I am missing out?

SLA based engagements and Managed Testing Services would be one layer.

The industry and technical expertise would be the second one.

The third one is that only career testers work at QualiTest. If you do QA testing just to get a paycheck, it is not the place for you. But if QA testing is your passion, you will probably find a home with like-minded people.

4. Where do you see the testing and development ecosystems evolving in three or five years from now?

There are a lot of trends; everybody talks about Cloud, Mobile, IoT, Big Data and Analytics. There is a lot of demand in the space related to each of those trends and their combinations.

Automation continues to be a huge thing. In fact, up until a couple of years ago automation was a nice-to-have, but what we have seen during the last two years is that automation has become a must-have, for the simple reason that time to market has become a huge business driver, and this will obviously continue expanding.

Organizations are investing more and more in innovation to reinvent business and go digital. Digital goes not only with innovation, but also with very quick innovation time to market. The only way to do that is by investing in automation.

From a skills perspective, it means you need the sort of talent that QualiTest calls an “All-Rounder”: a person who can design tests, implement them manually or by leveraging some sort of automation, and understand the business context and risk. A person who can generate test assets and test coverage as much as possible through some sort of automation.

The post Testing Talk Interview Series – QualiTest appeared first on QA Intelligence.

Categories: Companies

Browser-testing with Sauce OnDemand and Pipeline

This is a guest post by Liam Newman, Technical Evangelist at Cloudbees. Testing web applications across multiple browsers on different platforms can be challenging even for smaller applications. With Jenkins and the Sauce OnDemand Plugin, you can wrangle that complexity by defining your Pipeline as Code. Pipeline ♥ UI Testing, Too I recently started looking for a way to do browser UI testing for an open-source JavaScript project to which I contribute. The project is targeted primarily at Node.js but we’re committed to maintaining browser-client compatibility as well. That means we should run tests on a matrix of browsers. Sauce Labs has an "open-sauce" program that provides free test instances to open-source projects. I...
Categories: Open Source

Enforcing Jenkins Best Practices

This is a guest post by Jenkins World speaker David Hinske, Release Engineer at Goodgame Studios. Hey there, my name is David Hinske and I work at Goodgame Studios (GGS), a game development company in Hamburg, Germany. As Release Engineer in a company with several development teams, it comes in handy using several Jenkins instances. While this approach works fine in our company and gives the developers a lot of freedom, we came across some long-term problems concerning maintenance and standards. These problems were mostly caused by misconfiguration or non-use of plugins. With “configuration as code” in mind, I took the approach to apply static code analysis with the...
Categories: Open Source

Reviewing the latest blinks August 28

thekua.com@work - Sun, 08/28/2016 - 17:50

Brain Rules: 12 Principles for Surviving and Thriving at Work, Home and School by John Medina – A description of rules about how our brain works and how we learn. Our visual senses tend to trump our sense of smell. We need sleep to restore our energy and to help us concentrate. Spaced repetition is important, but assigning meaning to new words and concepts is also important to learning. Since I’m fascinated with learning and how the brain works, I’ll add this to my reading list.

Getting Things Done: The Art of Stress-free Productivity by David Allen – Although I never read the book, I felt like I already follow a similarly described organisation system. The GTD method has an almost cult-like following, but requires a lot of discipline. Unlike keeping a single list of things to do, it has a systemised variant for keeping long-lived projects and ways of managing tasks to help you focus on getting through actions. Probably a good book if you want to focus more on breaking things down into smaller steps.

The Checklist Manifesto: How to Get Things Right by Atul Gawande – With lots of examples from the healthcare industry, a reminder that useful checklists can help us avoid making simple mistakes. For me, the idea of standardised work (a lean concept) already covers this. I agree with this idea in principle, but I’m not so sure the book covers the negative side effects of checklists (people getting lazy) or alternatives to checklists (automation and designing against error/failure demand to begin with) as well.

Connect: The Secret LinkedIn Playbook to Generate Leads, Build Relationships, and Dramatically Increase Your Sales by Josh Turner – Either a terrible summary or a terrible book, this blink gave advice about how to use LinkedIn to build a community. Although the advice isn’t terrible, it’s not terribly new, and I didn’t really find any insights. I definitely won’t be getting a copy of this book.

Start With Why: How Great Leaders Inspire Everyone To Take Action by Simon Sinek – A nice summary of a leadership style that, rather than focusing on how something should be done, and the what, starts with the why. I liked the explanation of the Golden Circle: three concentric circles drawn within each other, with the Why being the starting point that leads to the How and ends in the What. It’s a good reminder about effective delegation and how powerful the Why motivator can be. I’ve added this book to my reading list too.

Categories: Blogs

Perceived render time – You take the blue pill; you believe whatever you want!

Perception drives end-user experience.  Ryan Bateman and I came across a very interesting article by Bryan Gardiner in Wired Magazine describing some of the science around waiting for a page to load.  At Dynatrace we are REALLY passionate about this and got to chatting about it.  We constantly talk to customers about best practices when […]

The post Perceived render time – You take the blue pill; you believe whatever you want! appeared first on about:performance.

Categories: Companies

Summer travel and the need for great website performance!

HP LoadRunner and Performance Center Blog - Fri, 08/26/2016 - 20:24

The worst time for your website to go down is during a high demand period. Summer is one of the busiest travel seasons, and when a website is unavailable the impact is incredibly detrimental. Keep reading to find out how to improve your website performance.

Categories: Companies

Jenkins World Speaker Highlight: Enforcing Jenkins Best Practices

This is a guest post by Jenkins World speaker David Hinske, Release Engineer at Goodgame Studios.

Hey there, my name is David Hinske and I work at Goodgame Studios (GGS), a game development company in Hamburg, Germany. As Release Engineer in a company with several development teams, it comes in handy to use several Jenkins instances. While this approach works fine in our company and gives the developers a lot of freedom, we came across some long-term problems concerning maintenance and standards, mostly caused by misconfiguration or non-usage of plugins. With “configuration as code” in mind, I took the approach of applying static code analysis, with the help of SonarQube (a platform that manages code quality), to all of our Jenkins job configurations.

As a small centralized team, we were looking for an easy way to control the health of our growing Jenkins infrastructure. With “configuration as code” in mind, I developed a simple extension of SonarQube to manage the quality and usage of all spawned Jenkins instances. The given SonarQube features (like customized rules/metrics, quality profiles and dashboards) allow us and the development teams to analyze and measure the quality of all created jobs in our company. Even though a Jenkins configuration analysis cannot cover all of SonarQube’s axes of code quality, I think there is still potential for conventions/standards, duplications, complexity, potential bugs (misconfiguration), and design and architecture.

The results of this analysis can be used by all people involved in working with Jenkins. To achieve this, I developed a simple extension of SonarQube containing everything needed to hook up our SonarQube with our Jenkins environment. The implementation contains a new basic language, “Jenkins”, and an initial set of rules.

Of course the needs depend strongly on the way Jenkins is being used, so not every rule implemented will be useful for every team, but the same is true of any other code analysis. The main inspiration for the rules was developer feedback and some articles found on the web. The different possibilities for using and configuring Jenkins provide a lot of potential for many more rules. With this new approach to quality analysis, we can enforce best practices like the following (a rough sketch of one such check appears after the list):

  • Polling must die (Trigger a build due to pushes instead of poll the repository every x minutes)
  • Use Log Rotator (Not using log rotator can result in disk space problems on the master)
  • Use slaves/labels (Jobs should be defined where to run)
  • Don’t build on the master (In larger systems, don’t build on the master)
  • Enforce plugin usage (For example: Timestamp, Mask-Passwords)
  • Naming sanity (Limit project names to a sane (e.g. alphanumeric) character set)
  • Analyze Groovy Scripts (For example: Prevent System.exit(0) in System Groovy Scripts)
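
The post describes implementing these checks as SonarQube rules; purely to illustrate the kind of check involved, here is a minimal shell sketch of the "Use Log Rotator" rule (the jobs/*/config.xml layout is standard Jenkins, but the rule logic is my own simplification, not the plugin's implementation):

#!/bin/bash

# Flag any Jenkins job whose config.xml does not configure build discarding / log rotation.
for config in "$JENKINS_HOME"/jobs/*/config.xml; do
  if ! grep -qi 'logRotator\|BuildDiscarder' "$config"; then
    echo "Rule violation (Use Log Rotator): $config"
  fi
done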

Besides taking control over the configuration of any Jenkins instance we want, there is also room for additional metrics, like measuring the number and different types of jobs (Freestyle/Maven etc.) to get an overview of the general load of the Jenkins instance. A more sophisticated idea is to measure the complexity of jobs and even pipelines. Like code, job configuration gets harder to understand as more steps are involved. On the one hand, scripts, conditions and many parameters can negatively influence the readability, especially if you have external dependencies (like scripts) in different locations. On the other hand, pipelines can also grow very complex when many jobs are involved and chained for execution. It will be very interesting for us to see where and why complex pipelines are being created.

For visualization we rely on SonarQube’s presentation and interpretation of the data, which offers a wide range of widgets. Everybody can use and customize the dashboards. Our centralized team, for example, has a separate dashboard where we can get a quick overview of all instances.

The problem of a “growing” Jenkins with maintenance problems is not new. Especially when you have many developers involved, including those with access to create jobs and pipelines themselves, an analysis like the one this SonarQube plugin provides can be useful for anyone who wants to keep their Jenkins in shape. Customization and standards play a big role in this scenario. This talk surely is not an advertisement for my plugin; it is more about the crazy idea of using static code analysis for Jenkins job configuration. I haven’t seen anything like it so far and I feel that there might be some potential behind this idea.

Join me at my Enforcing Jenkins Best Practices session at the 2016 Jenkins World to hear more!

David Hinske
Release Engineer
 Goodgame Studios

This is a guest post written by Jenkins World 2016 speaker David Hinske. Leading up to the event, there will be many more blog posts from speakers giving you a sneak peek of their upcoming presentations. Like what you see? Register for Jenkins World! For 20% off, use the code JWHINMAN

 

Blog Categories: Jenkins
Categories: Companies

Ask the Experts at Jenkins World 2016

Our events officer Alyssa has been working for the past several weeks to organize the "Open Source Hub" at Jenkins World 2016. The Hub is a location on the expo floor where contributors to the Jenkins project can hang out, share demos and help Jenkins users via the "Ask the Experts" program. Thus far we have a great list of experts who have volunteered to help staff the booth, which includes many frequent contributors, JAM organizers and board members. A few of the friendly folks you will see at Jenkins World are: Paul Allen - P4 Plugin maintainer and Pipeline contributor. R Tyler Croy - Jenkins infrastructure maintainer and board member. Jesse Glick - Pipeline maintainer...
Categories: Open Source

Because you can’t always blame network operations…or the network!

What’s your most memorable “blame the network” anecdote? If you’re in network operations, you will likely have many from which to choose. After all, doesn’t the network always get blamed first? To be fair, other teams often feel the same. Citrix and VMware admins have alternately been touted as “the new network guy,” and as […]

The post Because you can’t always blame network operations…or the network! appeared first on about:performance.

Categories: Companies

The Sauce Journey – Shu Ha Ri

Sauce Labs - Thu, 08/25/2016 - 15:30

If you’re attempting to implement an Agile/Scrum development process where none has existed before, you will surely encounter a moment of frustration on the part of your developers. “Why do we have to do these standups?” “I don’t understand why we need to assign story points, can’t we just get to the projects?” “Where is my technical specification?” Like Ralph Macchio in The Karate Kid, your developers may wonder why you have them doing the engineering equivalent of “wax on, wax off,” when what they really want to do is get into the fight. What Ralph Macchio eventually understands is that the performance of rote, rigid external exercises is a first step on the road to internal mastery, a process well known in the world of martial arts as Shu Ha Ri.

In its broader definitions, Shu Ha Ri describes a process of learning: in the Shu stage, the learner follows directions literally and adheres rigidly to whatever rules the teacher has set. In the Ha stage, the learner begins to see how the rules and directions can be adapted for specific situations, and exercises some judgement in how they should be applied. In the Ri stage, the learner has developed her own techniques, and now innovates freely as the situation demands.

Martin Fowler and Alistair Cockburn have written about the role of Shu Ha Ri as it applies to Agile development, but we could characterize the three stages as rigid adherence to the principles and ceremonies of Scrum, followed by what I like to call “pragmatic Scrum” that adapts to the styles and situations of individual teams, which then culminates in true Agility in approaching projects, challenges, and the process itself. The most important thing to take away from the application of Shu Ha Ri to software development, however, is that it is about the internalization of principles, followed by an understanding of their application, which leads finally to innovation in how problems and projects are approached. This is in sharp contrast to other methodologies, like traditional waterfall, that are simply about the imposition of schedules and rules that leave teams stuck in an eternal Shu limbo. It’s difficult to imagine that these teams would experience much satisfaction with their position, much less be capable of innovation.

It’s now been six months since we adopted Scrum at Sauce Labs. We’ve had our Shu period, and, as expected, it was a difficult time. As we implemented Scrum, there were many moments of frustration, questions about why, and some resistance to what were perceived as pointless rituals. It didn’t take long, though, before we had moved into pragmatic Scrum. The teams began to better understand their own abilities, and how to incorporate and adapt Scrum practices to the way they work together. And now, I’m guardedly optimistic that we are entering into Ri, as evidenced by the project to open our data center in Las Vegas. This was a true DevOps project, in that there was no easy separation between development requirements and operational requirements, and it required the cooperative efforts of many teams to accomplish. It also required that teams who had adopted and adapted Scrum learn how to make their particular version fit in with that of their colleagues – they had to take what they had learned, in other words, and improvise upon it. Had they not been able to do this, I have no doubt that we would never have been able to accomplish this monumental task, that had so many dependencies and inter-dependencies. In any traditional project management approach, we would no doubt still be writing the specifications, rather than delivering significantly improved performance to our customers. To paraphrase Mr. Miyagi, first we had to learn how to “stand up,” then we learned how to fly.

Joe Alfaro is VP of Engineering at Sauce Labs. This is the sixth post in a series dedicated to chronicling our journey to transform Sauce Labs from Engineering to DevOps. Read the first post here.

Categories: Companies

RanoreXPath – Tips and Tricks

Ranorex - Thu, 08/25/2016 - 15:00

The RanoreXPath is a powerful identifier of UI elements for desktop, web and mobile applications and is derived from the XPath query language. In this blog we will show you a few tips & tricks on how to best use the various RanoreXPath operators to uniquely identify UI elements. You can then use these RanoreXPaths in your recording and code modules to make your automated tests more robust.

Overview

Using RanoreXPath operators

Element Browser

The Ranorex Spy displays the UI as hierarchical representation of elements in the Element Browser view. The RanoreXPath can be used to search and identify items in this UI hierarchy.

In this example, we’ll use the tool KeePass as application under test (AUT). This open source password manager application is one of our sample applications delivered with Ranorex Studio. If you have multiple applications open, Ranorex Spy will list them all. Filtering the application you want to test will increase speed and give you a better overview. To do so, track the application node of KeePass and set it as root node (context menu > ‘Set Element as Root’). Now, only the main KeePass form and its underlying elements are visible.

Ranorex Spy

General Layout of RanoreXPath

RanoreXPath expressions are similar to XPath expressions. They share both syntax and logical behavior. A RanoreXPath always consists of adapters, attributes and values:

Layout of RanoreXPath

The adapter specifies the type or application of the UI element. The attribute and values specify adapter properties.

The absolute RanoreXPath of our KeePass form looks like this:

RanoreXPath of AUT

The form is an adapter specifying the type or classification of the UI element. It is followed by the attribute value comparison, which identifies the requested element. In this example, the comparison operator is a simple equality.

If you want to know more about how the RanoreXPath works, we recommend our dedicated user guide section.

Search for multiple button elements

You can list all button elements that are direct children of a designated position in your AUT. Have a look at these two examples (a short summary of both expressions follows them below):

1. List all buttons that are direct children of the KeePass toolbar:

To do so, simply set the toolbar as root node and type ./button into the RanoreXPath edit field, directly after the given RanoreXPath.

Relative path to all child buttons

This will create a relative path to all child nodes of the current node that are buttons.

Element Tree with Child Buttons

2. List all buttons of your AUT:

Navigate back to the form adapter, set it as root node and type in .//button.

Relative path to all buttons

You’ve now created a relative path to all descendants of the current node that are buttons, i.e. all buttons at any level of the subtree below the current element.

Element Tree with all Buttons
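
To sum up the two searches, these are the expressions typed into the RanoreXPath edit field, with the root node set as described in each example:

  ./button       all buttons that are direct children of the current root node
  .//button      all buttons anywhere in the subtree below the current root node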

Identify controls with a specific attribute

You can also create a path to controls, to filter them according to specific attributes. In this example, we want to find all checked checkboxes.

Open the “Find” dialog in KeePass (<CTRL><F>), as this dialog contains checkboxes, and set it as root node. Now, you can validate which items of the checkbox control have the attribute “checked” set to true. To do so, enter “//checkbox[@checked='True']”:

RanoreXPath for Checked Checkboxes

As you can see, only the checked checkboxes will be visible in the Element Browser.

Element Tree with all checked checkboxes

Identify checkboxes by combining attributes

You can further extend the previous example by combining attributes. This enables you to, for example, omit certain items from the search, or search for specific items.

1. Omit a specific item from the search

You can omit a specific item from the search using the “not equal” operator and the “and” conjunction. In this case, we want to omit the item “&Title” (a sketch of this and the following variant appears after example 2 below):

RanoreXPath for Checked Checkboxes excluding Title

Element Tree with all checked checkboxes excluding Title

2. Search for specific items

You can use the “or” instead of the “and” conjunction to extend your search and only look for specific items. Extend the checkbox search to look for the items “&Title” and “&URL”:

RanoreXPath for Checkboxes

Element Tree with checkboxes
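
The exact expressions are only visible in the screenshots above, but plausible reconstructions of the two variants might look like the following (treating the checkbox labels as living in a text attribute is an assumption, not something stated above):

  //checkbox[@checked='True' and @text!='&Title']      checked checkboxes, excluding the “&Title” item
  //checkbox[@text='&Title' or @text='&URL']           only the “&Title” and “&URL” checkboxes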

Recognize related elements using the parent operator

After running the Ranorex desktop sample project, there will be two entries in our AUT – one for a WordPress and one for a Gmail account. In this case, we’d like to find the username of the “Gmail” KeePass entry:

RanoreXPath for username of Gmail entry

Element Tree for username of Gmail entry

Username of Gmail entry

Start with the RanoreXPath to the cell containing the text “Gmail” (framed in red). Next, use the relationship operator “parent” to reference the parent node of the current element. In this example, it’s a row (framed in blue). The index “[2]” navigates to the second cell, which contains the Gmail username (framed in green).
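
Written out, the path described above might look roughly like this; everything before the parent step, as well as the text attribute, is an assumption based on the description rather than on the screenshot:

  //cell[@text='Gmail']/parent::row/cell[2]

The first step finds the cell containing “Gmail”, parent::row moves up to the row that contains it, and cell[2] selects the second cell in that row, which holds the username.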

Recognize related elements by using preceding- and following-sibling

Another way to search for related elements is to use the relationship operator “preceding-sibling”. In this example, we want to find the title of a KeePass entry based on its username.

Relationship operators

The command “preceding-sibling::cell” lists all preceding cells. In this case, the result is the title (framed in green) which corresponds to the given username (framed in red).

RanoreXPath Preceding Sibling

Element Browser Preceding Sibling

In contrast, the command “following-sibling::cell” delivers all following cells. In our case, these are all following cells (framed in blue) that correspond to the given username (framed in red).

RanoreXPath Following Sibling

Element Browser Following Sibling
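
Assuming the same row/cell layout as in the parent example, and using a placeholder username value (hypothetical, not taken from the screenshots), the two sibling lookups could be sketched like this:

  //cell[@text='john.doe']/preceding-sibling::cell      all cells before the username cell in the same row, e.g. the title
  //cell[@text='john.doe']/following-sibling::cell      all cells after the username cell in the same row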

Identify attributes fields using regular expressions

You can also use regular expressions in attribute conditions to identify attribute fields. In this example, we’d like to filter cell adapters that contain an email address in their text attribute. A regular expression matching an email address may look like this: “.+@.+\..+”.

RanoreXPath Regular Expression

Element Browser Regular Expression

The “~” operator instructs Ranorex to filter attribute fields using a regular expression. The “.” in our regular expression matches any single character, while the “+” specifies that the preceding element has to occur one or more times. To escape special characters (such as “.”), enter a backslash before the character.

In our example, the expression matches any text that contains the character “@” with one or more characters before and after it, followed by a “.”, which in turn is followed by one or more characters.
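
Putting the pieces together, and based on the cell adapter and text attribute mentioned above, such an expression could be sketched as:

  //cell[@text~'.+@.+\..+']      cells whose text attribute matches the email address pattern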

For more examples on how to use regular expressions in RanoreXPaths, please have a look at this user guide section: RanoreXPath with regular expression.

Identify attributes with dynamic values

Dynamic attribute values change each time an element is displayed anew. Fortunately, dynamically generated content usually has a prefix or postfix. To identify dynamic elements, you can either use regular expressions, as described above, or use the ‘starts with’ or the ‘ends with’ comparison operators:

  • ‘>’: The value of the attribute must start with the given string
  • ‘<’: The value of the attribute must end with the given string
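
As a sketch, assuming a web button whose id attribute is generated with a stable prefix or postfix (the attribute name and the values here are hypothetical):

  //button[@id>'submitBtn_']      the id starts with ‘submitBtn_’
  //button[@id<'_submit']         the id ends with ‘_submit’
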
Conclusion

The RanoreXPath enables you to find and uniquely identify every single UI element of desktop, web and mobile applications. You can use the RanoreXPath operators to make your test suite more robust and identify even dynamic attribute values.


The post RanoreXPath – Tips and Tricks appeared first on Ranorex Blog.

Categories: Companies

Now On DevOps Radio: Leading the Transformation, Live - with Gary Gruver!

Achieving a DevOps transformation is much easier said than done. You don’t just flip a switch and “do” DevOps. It’s also not about buying DevOps tools.

Don’t you wish you could just sit down and talk with someone who’s done it all before? You can! This week, we’re excited to share that Gary Gruver, author and Jenkins World 2016 keynote speaker, joined us on DevOps Radio to talk about leading a DevOps transformation. So plug in your headphones, shut your office door and get comfortable: You’re going to want to hear this!

For those of you who don’t know Gary, he’s co-author of A Practical Approach to Large-Scale Agile Development, a book in which he documents how HP revolutionized software development while he was there, as director of the LaserJet firmware development lab. He’s also the author of Leading the Transformation, an executive guide to transforming software development processes in large organizations. His impressive experience doesn’t stop at being an author and a director at HP, though. As Macys.com’s VP of quality engineering, release and operations, he led the retailer’s transition to continuous delivery.

In this episode of DevOps Radio, Gary and DevOps Radio host Andre Pino dive into the topics covered in Gary’s two books. They talk through the reality of leading a transformation, discussing practical steps that Gary took. They also bring up challenges — ones that Gary faced, and ones that you might, too.

So, what are you waiting for?! Tune in to the latest episode of DevOps Radio. It’s available now on the CloudBees website and on iTunes. Join the conversation on Twitter by tweeting out to @CloudBees and including #DevOpsRadio in your post!

You can meet Gary in person at Jenkins World and hear his conference keynote. Register now! Use promotion code JWHINMAN and you’ll get a 20% discount. Meanwhile, learn more about Gary on his website.

 

Blog Categories: Jenkins, Company News
Categories: Companies

Testing Talk Interview Series – Perfecto Mobile

PractiTest - Wed, 08/24/2016 - 16:19

Testing Talks logo

Perfecto Mobile logo

Perfecto Mobile, the world’s leader in mobile app quality, provides a hybrid cloud-based Continuous Quality Lab that enables mobile app development and testing teams to deliver better apps faster. The Continuous Quality Lab supports testing processes earlier and more often in the development cycle, enabling faster feedback and improved time to market.

1. Tell us about yourself. Please share a couple of interesting things not everyone knows about you.

My name is Eran Kinsbruner and I am the Mobile Technical Evangelist at Perfecto. I’ve been with the company for the last four years, and in the quality industry since 1999.

I started to work in the hardware and software industry, then moved to Sun Microsystems as a senior QA manager for seven years. After that I managed verification and validation activities at NeuStar and General Electric. My next position was that of chief technology officer (CTO) at Matrix. And eventually I moved to Perfecto where I am working as a technical evangelist especially in the mobile field, but also in other areas such as web testing.

Mobile testing is one of the most interesting areas of focus for me, and I am constantly blogging about it on my personal blog. I’ve recently moved from Israel to Boston with my family and two cats, and am enjoying life in the U.S. You can talk to me on Twitter at @ek121268.

What does an Evangelist do for a living?

Actually a lot of things, but from a macro view I engage and track the market. In my specific case I track the mobile space (thanks to my previous experience I am very familiar with mobile trends) and apply these trends to quality practices.

I also speak frequently at different events, usually about mobile and web quality, on behalf of Perfecto. I contribute a lot of white papers and blogs, host webinars, and spend a lot of time working on product strategy. It’s always changing and always challenging, which is why I love it.

Two interesting things…

I have a twin brother, Lior, who not only looks just like me but also works in the quality assurance industry. So I have a replacement if needed.

The second thing is that I hold a patent, registered in the US under my name, and I am currently working to apply it at Perfecto in the new digital space of mobile and web.

2. Can you tell us a little about your company and the products that you are creating? Tell us more about the “DNA” of Perfecto. What makes your company a great place to work?

Perfecto serves enterprise customers all around the world, mostly in the US. We have customers from many Fortune 500 companies who have huge demands and expect to get high quality services, so they can in turn deliver seamless experiences to their customers across the web and mobile channels.

The solution enables enterprises to create high-quality apps and sites, and comprises two components:

Firstly, a cloud platform containing thousands of real devices (phones, tablets, smart watches, etc.) connected to real live networks and desktop browsers for testing apps and web sites.

Secondly, a variety of testing tools allow users to perform manual and automated tests, as well as performance testing and monitoring, on web, responsive web and mobile apps.

Combined, these components enable customers to deliver the highest-quality experiences across digital channels, delighting their own customers and driving business.

Another major piece that sets us apart is our integration portfolio. Perfecto recognizes a major shift to open source tooling, and offers both a RESTful API and integrations with leading open source tools such as Selenium, Appium, Espresso, Calabash, Cucumber and Jenkins. In addition, Perfecto integrates with tools and IDEs from Microsoft, HP, IBM and CA.

As for our history, Perfecto has been in the field of Mobile and Web quality for years, and is still innovating and moving fast. (In fact, Perfecto was just named a Leader in Forrester’s Wave for Front-End Mobile Testing Tools, 2016). So working for a recognized leader in the industry has definitely made things here exciting. And since we practice Agile in development, we’re able to adapt and innovate quickly, keeping up with changes in the digital market.

In fact, if you compare Perfecto’s innovations with other vendors, you will see we release much faster than others. Our belief is that web and mobile are not only dynamic, but fast paced industries, so we must perform this way as well. I’ve found that if an organization is not following the web and mobile trends, it quickly becomes irrelevant to the market and does not deliver the product according to customers’ requirements.

3. How do you see the testing and development ecosystems evolving in 3 or 5 years from now?

I see test teams becoming feature teams; it’s a movement that has been happening over the last 1-2 years. The importance of quality is at an all-time high, as digital experiences are working their way into our lives like we could only have imagined years ago (smart homes, smart cars, reliance on mobile devices). With more mature teams, I’m seeing dev teams take on more responsibility when it comes to quality in order to address shortened release cycles and higher quality demands. This also presents a challenge (and opportunity) for testers to grow their technical skills, gain experience with newer tools such as open source ones, and transform into DevTest roles.

The easiest way to be first to market in many digital cases is to increase velocity without harming the quality of a website or app. What I see for the next 1-2 years is that, while Agile is still growing, it is likely to become the de-facto approach for the digital development life cycle. Developers will choose more open source tools, as they tend to meet their needs of improving velocity without impacting quality.

Also, in the app space, everything will be much more connected. Everything will tie back together seamlessly as technology and processes become more focused on user experience and innovation.

The post Testing Talk Interview Series – Perfecto Mobile appeared first on QA Intelligence.

Categories: Companies

Automated Optimization with Dynatrace AppMon & UEM

Most Application Performance Management (APM) tools have become really good at detecting failing or very slow transactions, and at giving you system monitoring and code-level metrics to identify root cause within minutes. While Dynatrace AppMon & UEM has been doing this for years, we took it to the next level by automatically detecting common architectural, performance […]

The post Automated Optimization with Dynatrace AppMon & UEM appeared first on about:performance.

Categories: Companies

Knowledge Sharing

SpiraTest is the most powerful and affordable test management solution on the market today