Skip to content

Feed aggregator

Trying to be CEWT

Hiccupps - James Thomas - Sun, 07/05/2015 - 08:03

I attend, enjoy, hopefully contribute to, and get a lot from, the local tester meetups and Lean Coffee in Cambridge. But I'd had the thought kicking around for a long time that I'd like to try a peer workshop inspired by MEWT, DEWT, LEWT and the like. I finally asked a few others, including the local meetup organisers, and got mostly positive noises, so I decided to give it a go.

I wrote a short statement to frame the idea, based on LEWT's:
CEWT (Cambridge Exploratory Workshop on Testing) is an exploratory peer workshop. We take the view that discussions are more interesting than lectures. We enjoy diverse ideas, and limit some activities in order to work with more ideas.

I also proposed a mission for an initial attempt to validate the idea locally on a small scale.

Other local testers helped to refine the details in the usual testing ways - you know: criticism, questions, thought experiments, challenges, comparisons, mockery and the rest - and a list of potential attendees was drawn up. In parallel I solicited advice from the groups that had inspired me, asking what's worked well and what hasn't, particularly in the events and in the organisation of them.

This post aggregates and roughly sorts their responses, removing mentions of specific groups or people. I'd like to thank all of them for being so forthcoming and open with their experience and advice.

I wanted to pull two specific comments out, two that I tried to keep uppermost in my mind throughout:
  • As you will understand: there is no best practice.
  • The thought is this: at a peer workshop, I should consider everyone my peer. For the duration of the workshop, I will attempt to listen to – and question – anyone who I share the room with, regardless of whether they have more or less experience, or whether I generally consider their work good or poor, whether I am fascinated, bored, repelled, awestruck or confused. 

I started this process at the end of April and yesterday (July 4th) we had CEWT #1. There were a few rough edges, and I learnt a thing or two, and I already know some things I would change if and when we have another, but there'll be more on that later. For now, here's that aggregated advice for anyone else thinking of trying it ...

Starting

We started small: in a kitchen with only a few people.

I have no idea how many interested people you know, but it is smart either to keep it very small to start with, which you can organise by yourself, or to make it a bit bigger, in which case you should have some help.

I’d thought about doing this for about 12 months before our first one, and it was only when I started to talk to the others about the idea that I found they had similar thoughts and things started to move.

Size

My experience is that you need about 10 people to have good discussions in LAWST style. 7-8 people could be okay, although I don't think you need facilitation with such a small group. You also have the risk that if 1 or 2 do not show up, your group becomes even smaller.

We have limited it to a maximum of around 25 people. As we are always looking to improve, this all might be subject to change in the near future.

I had assumed [the sense that in a peer conference everyone is granted the status of everyone else's peer] was a central guidance to peer conferences – even if, in practice, it was occasionally hard to see such respect in action. However, I’m no longer certain of this; when I’ve shared my position with other peer conference organisers, it has been (generally) either alien or less important. I think this gets hard with >8 people, and is pretty impossible with >15. A 25-person room will naturally form groups, gurus, acolytes and pariahs – so it’s ludicrous of me to expect larger peer conferences to work this way.

Personally, I think the max size for any peer group is rather under 20.

Attendees

We have a very simple approach to application and invitation - if someone asks if they can come, then they can. Done. I tell people that there's a cutoff, what the cutoff is, and that people who apply when numbers are under the cutoff can come, and people who are later can't come.

Currently, I ask prior participants to set the theme and the date, so they know before anyone else. This gives them precedence, but if they don't take the opportunity, they don't get to go.

Wrong people: who am I to judge? However, if someone applies out of the blue, I'll talk with them so that they can judge if they're the right person. Usually their judgement is sound.

If someone's interested enough to ask to come and to give up their time to be part of it, then they're in - whether they 'fit' the group, or not. We have had people who didn't fit, and sometimes they've been wonderful contributors, sometimes they've triggered good conversations and interesting realignments. No one has walked out yet. A few participants have complained about others, and I can deal with that as facilitator if something is said early enough. I sometimes find my own comfort challenged – but I don't think it's my role to exclude someone, and I'm sure that the group is muscular enough to chew someone up and spit them out if it absolutely has to.

We are thinking of adding the possibility of one speaker chosen by the participants.

All organisers can introduce one (sometimes two) others to the peer conference. We often try to invite somebody outside of the testing circle to add some other views to our conferences.

If you are inviting people, then invite people you think will have something interesting to say on the topic rather than people you know or feel you need to invite out of loyalty – remember it’s firstly a learning opportunity, not a social gathering.

Even if you don’t know someone well but want to invite them, don’t be afraid to reach out and ask them – most people like to be invited to these things.

I find that the more diverse the group, the more it offers guarded respect to each individual: our two-people-with-less-than-two-years-experience thing helps with the diversity.

The Organising Team

We are organised as a small core group, with assigned roles - which rotate per event - shared among some of us to organise the peer conference.

A small team will help give the idea some momentum, generate more interesting ideas and share the effort of creating the event.

Play to people's strengths - we are all very different with unique skills and personalities, but we each bring something to the table.

If you have a team then agree roles (we change roles each time) to ensure things get done. Generally you will need:

  • 1 x Content Owner – responsible for describing the theme, reviewing and feedback on abstracts, ensuring all attendees have an abstract.
  • 1 x Facilitator – responsible for managing the flow of the discussions on the day (doesn’t need to speak)
  • 1-2 x Organisers – responsible for logistics (venue arrangements, ensuring costs are covered either by sponsorship or attendees, providing travel and hotel information, keeping in touch with attendees etc.).

We have introduced the formal role of 'content owner' in the conferences to keep us from going all over the place. He/she chooses the speakers. The conferences are centred around experience reports and discussions are facilitated by a facilitator.

Find some awesome people to work with, it's a lot of work for one person!

Logistics: Before

Choose relevant and open topics that encourage a wider range of views and discussions.

Find a good venue.

Food is important - quality grub adds to the vibe.

All participants are obliged to send in a proposal for a small presentation (organisers too).

Asking for abstracts (and receiving them) helps to focus people's minds ahead of the day.

Chase people for abstracts, and review and give feedback on the abstracts. In my opinion, if you don’t have abstracts then some attendees will forget to prepare and attempt to wing it, resulting in less interesting talks and discussions. However, that does depend upon who the attendees are.

Don't underestimate the effort required to invite people or encourage people to attend (if you have an open attendance). You will have people who drop out in the lead up to the event so be prepared.

Plan ahead, we have started planning 3-4 months ahead to give people time to commit and provide abstracts. When you invite or accept people to attend, ensure they know the outline plan with milestones such as confirming attendance, when initial and final abstracts are due etc.

Keep regular contact with those who are attending to keep them informed of plans, reminders of upcoming milestones, hotel and travel arrangements etc.

Logistics: On the Day

If you can, find someone to do the distracting mid-workshop logistics (i.e. who’s eating what, taking calls from late people).

Trying to get through all of the talks works well - fast paced and high energy.

Not worrying about getting through all the talks works well too - slower and deeper.

Breaks: as long as possible without losing momentum and direction. Proper, multithreaded conversation happens in the breaks. The “talks” are a primer for the discussions, the discussions a primer for conversations – and connections and ideas grow from those conversations.

Set-up: everyone should be able to see everyone else’s face, all the time. Other than that, don’t be precious about room layout, drinks, stationery, power supplies, matching tables or any other fripperies. Indeed, the more informal, the better. Help participants to feel comfortable, not coddled, and certainly not privileged.

Visuals: I strongly discourage slides, and encourage flip charts. They’re more immediate, more interactive, and less goes wrong. I prefer flipcharts to whiteboards, as they’re more permanent and one can flip back.

Dot voting lean coffee style gets everyone involved.

Keep presentations nice and short; 15 minutes max.

Ordering: the room gets to decide what goes early (the facilitator gets a deciding vote) – so topics at the end usually get less time. This can make them more focussed, and the speaker will often be able to tune what they have so that it suits the attention of the room.

We don't have a content owner deciding what gets attention or priority, we don't have a scribe making public notes, we don't have a mission. We all agree at the outset to be facilitated, which helps - but we don't necessarily decide what 'facilitation' is.

The relatively-fast turnover of topics helps, a lot.

Facilitation

Ask the room to accept you as someone who will regulate the ebb and flow. Don’t direct (or dictate) the content.

Accept that, as facilitator, you’re not really at the workshop, and give the primary part of your attention to emotions of the people in the room, not to what is being said.

Monitoring people's energy and staying fluid with structure and content helps keep things moving.

When I'm facilitating, I try to do the job with as light a touch as possible - basically I keep a queue, keep my eye on time, and try to help the group stay within the discipline of conversing in a way that lets everyone talk, and everyone listen. Even that, however, requires my complete attention on the room - which means I don't make many notes for myself or contribute much to the conversation.

The facilitator is not a peer. The participants give the facilitator their attention, and their permission to stop and start them, in pursuit of a greater goal than their own individual airtime. The facilitator accepts their temporary status, and returns the favour by serving the group and putting his or her own needs aside.

Name cards can help your own flow.

Getting everyone’s attention focussed from chat to the group: There are a clutch of approaches. Most work, and most use sound or visual cues. I pick up whatever (physical) sound effect I’ve not used recently. Singing bowls, thundersticks, jingle toys. It gets to a point where, when everyone’s concentrating, one has only to pick the thing up to make people switch focus. My favourite was the vuvuzela – a disgustingly loud football horn. I don’t remember blowing it at all (except to try it out).

For each new topic, I try to remember to announce the topic and speaker, ask how much time they want to talk, support them no more than they want, and to ask the room to thank them at the end.

As someone starts their topic, I split the audio recording and also write down the start time, the time the speaker’s asked me to give them, and the time we’ve all agreed to spend on the topic. I write those as absolutes, not relatives, because calculation takes your attention – (ie 10:03:15, 10:13, 10:33). My laptop clock is always in view.

I record audio, and this also keeps track of elapsed relative time (i.e. 0:17:30 since the topic started).

I keep track of the timing info and the current queue on the same topic card that I’ve pulled off the wall – the card that started with a topic title and ended up covered in sticky dots. Keeping track of the question stack/queue is easy – it’s a list, sometimes with indents and squiggles. If sub-topics are spawning more sub-topics, do ask the room if they want to go deep or wide.

Allow the clock to rule, allow the room to override the clock. Don’t worry about going short. The room will need to regularly be reminded of the time available as the stack builds up and time burns down.

Every few questions, I’ll tell the room who the next 2-4 people on the stack are. If we’re in open discussion, and I feel the room needs to move on, I’ll catch the eye of whoever is speaking, breathe in as they finish a point, and indicate the next question by pointing to someone and saying their name.

Don’t fear dropping a person from the queue – it’s your job. But don’t drop them slyly, either.

I bite my tongue (metaphorically, mostly) to stop (my) witty interjections; they’re not usually that great, and it’s an abuse of the role the room has allowed me to take. For the same reason, I don’t usually ask many questions – but I don’t absolutely exclude myself, either.

If, as time runs out on a topic, you give participants the chance to pull their questions or comments to let other questions be asked, they might just do it.

As a facilitator, the people who give me problems are those who assume their contribution is more important than the person who currently has the room's attention, the people with one thing to say and a big personal stake in having it heard, and people who stop listening after someone uses a word that is hot (or dull) for them.

I'm sometimes a problem if I get involved, and I'm lucky that people help me rein myself in if I get out of hand. But problems are few and often easy to deal with if one has a feel for the tolerance and firmness that suits the mood of the room (the whole group, not just the loud participants).

If everyone speaks at once, I need to decide when and how and whether to stop them – and if people only speak when they feel they have permission to speak, I’ve done it all wrong and need to shake up the room. Stay between these extremes, let people (including yourself) be human, aim for fine chat, and you’ll have done a job that anyone should be satisfied with.

I find that expression and body position will tell you whether someone has a new point or a follow-on (and if not, just ask), so I think that K-cards in a group of fewer than 20 people are a constraining gadget.

I don’t tend to give much leeway to an extended back-and-forth between speaker and a single interlocutor.

Discourage bad behaviour more than the person who is behaving badly: Firmly and clearly block people who are being bullies, then swiftly forgive them and allow them a chance to redeem themselves in the eyes of their peers.

For general ideas, see Paul Holland on facilitation.

Success or failure (pick your own definition) is mostly down to the group, not the facilitator – but you are, as Jerry Weinberg might say, responsible for your reactions to the group.
Image: https://flic.kr/p/4F24G7
Categories: Blogs

Best Practices for Pre-Coverage Filters

NCover - Code Coverage for .NET Developers - Fri, 07/03/2015 - 13:14

NCover is designed to easily collect code coverage on the build server, across multiple machines and across entire development or QA teams. These deployment options, combined with NCover’s ability to collect coverage regardless of testing method, including manual and automated tests, provide the industry’s most comprehensive solution for complete .NET code coverage.

Although NCover has been optimized to handle large volumes of coverage data, many users choose to focus coverage on specific sections of code during development and testing cycles. There are many reasons for this, including the desire to reduce system and resource utilization, the need to reduce cycle time by focusing on code known to have changed, or the need to align the coverage process with organizational objectives.

One of the best ways to focus the collection of coverage is through the use of pre-coverage filters.

Why Use Pre-Coverage Filters?

Pre-coverage filters prevent code profiling and data collection for items specified in the pre-coverage filter. By preventing unnecessary profiling and data collection, pre-coverage filters can reduce, sometimes significantly, the amount of work required by NCover to profile an application.

Best Practices For Pre-Coverage Filters

NCover’s pre-coverage filters allow for a wide range of usage scenarios. However, there are several best practices to ensure that your pre-coverage filters both achieve your desired result and are used in the most efficient way possible.

Focus On The Module

When using the pre-coverage filters Include and Exclude, we recommend focusing those rules at the Module level. If you want to exclude more specific parts of the underlying classes, we recommend using post-coverage filters. An exception to this rule is if you want to exclude generic patterns like .ctor or .cctor.

We do not recommend the use of include filters for namespaces, classes, and methods in the pre-coverage filter for a project. There are three primary drawbacks:

  • Filters on namespace, class, and method apply to all modules and so do not eliminate modules, they only filter the contents during collection. Ultimately, this does not save time or space.
  • Collateral classes with coverage help to reinforce successful coverage collection on new classes. If a new class shows as uncovered, but an old class shows as covered, then the user has more reason to believe that further testing is needed, rather than suspect the coverage was somehow dropped in error.
  • Saving a post-coverage filter by build-id or version allows you to continue to revisit the coverage of previous test runs without losing the ability to trend the coverage of a module across time.

In general, you want to develop pre-coverage filters that create the shortest possible list of filters. In addition to improving processing time, it removes potential confusion from the use of overly complicated rules. You should also consider using Regex rules, where appropriate.
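For example (an illustrative sketch only, not NCover configuration syntax), a single regex rule can often stand in for a long list of literal module names; the module names below are hypothetical:

// Hypothetical module names, used only to illustrate the idea of a regex rule.
// Instead of listing MyCompany.Web.Orders, MyCompany.Web.Billing, ... one by one,
// a single pattern can match the whole family of modules.
const moduleRule = /^MyCompany\.Web\..+/;
moduleRule.test("MyCompany.Web.Orders");   // true  - covered by the rule
moduleRule.test("ThirdParty.Logging");     // false - not matched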

Include or Exclude

Exclude filters are fairly straightforward. Collection will be captured for everything except what has been specifically excluded. This approach ensures that any new assemblies will be captured and included in the coverage analysis.

Include filters, on the other hand, require that each module be specifically named to be included in the coverage. This approach offers benefits in large systems that load a large number of DLLs. However, it is important to remember that any completely new assemblies will need to be added manually to the filter list.

Follow the Flow

When both Include and Exclude pre-coverage filters are used, Includes are applied first and then Excludes are applied to, or subtracted from, what was originally included. This provides the option to target very specific areas of code, but it is important to remember that this logic will be applied in this order regardless of the order in which the pre-coverage filters are created.

Also, when using both Include and Exclude pre-coverage filters, you want to eliminate filters that duplicate effort. For instance, if you Include Module A, you do not need to Exclude Module B. Module B is already excluded by definition when the filter logic is applied.
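As a rough sketch of that ordering (illustrative logic only, not NCover's actual implementation), you can think of it as: start from the include set, then subtract the excludes:

// Illustrative only: the include-then-exclude flow, not NCover's real code.
// Includes are applied first; excludes are then subtracted from that result,
// regardless of the order in which the filters were created.
function applyFilters(modules, includes, excludes) {
  const included = includes.length
    ? modules.filter(m => includes.some(rx => rx.test(m)))  // keep only what is included
    : modules;                                              // no includes: keep everything
  return included.filter(m => !excludes.some(rx => rx.test(m)));
}

// Hypothetical module names for illustration.
applyFilters(["App.Core", "App.Web", "ThirdParty.Json"], [/^App\./], [/^App\.Web$/]);
// => ["App.Core"]  (ThirdParty.Json never needed an explicit exclude)

This also shows why excluding Module B is redundant once only Module A is included: anything outside the include set never makes it into the result in the first place.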

Test Your Filters

Just as you should always test your code before you deploy it, you should always test your coverage collection before you begin a testing cycle. We recommend running a limited test of your coverage settings and inspecting the data collected by NCover. This is true whether you are running only NCover Code Central on your build server or you are collecting coverage from a variety of machines. It’s very frustrating to run through an entire testing cycle only to discover that your filters excluded key Modules or included large portions of code that you did not want in your analysis.

You can easily get Module summary data by using either the Summarize command or the Report command. Either of these commands can help verify that the correct modules are getting loaded and covered.

If you want to investigate coverage on a machine where Collector is installed, you can pause syncing to Code Central. This will allow you to limit your analysis to only that machine.

Troubleshooting

If you need to troubleshoot a pre-coverage filter, set the logging level of the project to Verbose. You can then check the profiling logs for entries that match the following pattern:

ClassLoaded ---- Name[%s] Explicit include(%s) exclude(%s)

This will show you exactly how NCover is applying pre-coverage filters in your coverage.

The post Best Practices for Pre-Coverage Filters appeared first on NCover.

Categories: Companies

Water Leak Changes the Game for Technical Debt Management

Sonar - Fri, 07/03/2015 - 09:07

A few months ago, at the end of a customer presentation about “The Code Quality Paradigm Change”, I was approached by an attendee who said, “I have been following SonarQube & SonarSource for the last 4-5 years and I am wondering how I could have missed the stuff you just presented. Where do you publish this kind of information?”. I told him that it was all on our blog and wiki and that I would send him the links. Well…

When I checked a few days later, I realized that actually there wasn’t much available, only bits and pieces such as the 2011 announcement of SonarQube 2.5, the 2013 discussion of how to use the differential dashboard, the 2013 whitepaper on Continuous Inspection, and last year’s announcement of SonarQube 4.3. Well (again)… for a concept that is at the center of the SonarQube 4.x series, that we have presented to every customer and at every conference in the last 3 years, and that we use on a daily basis to support our development at SonarSource, those few mentions aren’t much.

Let me elaborate on this and explain how you can sustainably manage your technical debt, with no pain, no added complexity, no endless battles, and pretty much no cost. Does it sound appealing? Let’s go!

First, why do we need a new paradigm? We need a new paradigm to manage code quality/technical debt because the traditional approach is too painful, and has generally failed for many years now. What I call a traditional approach is one where code quality is periodically reviewed by a QA team or similar, typically just before release, and results in findings the developers should act on before releasing. This approach might work in the short term, especially with strong management backing, but it consistently fails in the mid to long run, because:

  • The code review comes too late in the process, and no stakeholder is keen to get the problems fixed; everyone wants the new version to ship
  • Developers typically push back because an external team makes recommendations on their code, not knowing the context of the project. And by the way the code is obsolete already
  • There is a clear lack of ownership for code quality with this approach. Who owns quality? No one!
  • What gets reviewed is the entire application before it goes to production and it is obviously not possible to apply the same criteria to all applications. A negotiation will happen for each project, which will drain all credibility from the process

All of this makes it pretty much impossible to enforce a Quality Gate, i.e. a list of criteria for a go/no-go decision to ship an application to production.

For someone trying to improve quality with such an approach, it translates into something like: the total amount of our technical debt is depressing, can we have a budget to fix it? After asking “why is it wrong in the first place?”, the business might say yes. But then there’s another problem: how to fix technical debt without injecting functional regressions? This is really no fun…

At SonarSource, we think several parameters in this equation must be changed:

  • First and most importantly, the developers should own quality and be ultimately responsible for it
  • The feedback loop should be much shorter and developers should be notified of quality defects as soon as they are injected
  • The Quality Gate should be unified for all applications
  • The cost of implementing such an approach should be insignificant, and should not require the validation of someone outside the team

Even changing those parameters, code review is still required, but I believe it can and should be more fun! How do we achieve this?


When you have a water leak at home, what do you do first? Plug the leak, or mop the floor? The answer is very simple and intuitive: you plug the leak. Why? Because you know that any other action will be useless and that it is only a matter of time before the same amount of water will be back on the floor.

So why do we tend to behave differently with code quality? When we analyze an application with SonarQube and find out that it has a lot of technical debt, generally the first thing we want to do is start mopping/remediating – either that or put together a remediation plan. Why is it that we don’t apply the simple logic we use at home to the way we manage our code quality? I don’t know why, but I do know that the remediation-first approach is terribly wrong and leads to all the challenges enumerated above.

Fixing the leak means putting the focus on the “new” code, i.e. the code that was added or changed since the last release. Things then get much easier:

  • The Quality Gate can be run every day, and passing it is achievable. There is no surprise at release time
  • It is pretty difficult for a developer to push back on problems he introduced the previous day. And by the way, I think he will generally be very happy for the chance to fix the problems while the code is still fresh
  • There is a clear ownership of code quality
  • The criteria for go/no-go are consistent across applications, and are shared among teams. Indeed new code is new code, regardless of which application it is done in
  • The cost is insignificant because it is part of the development process

As a bonus, the code that gets changed the most has the highest maintainability, and the code that does not get changed has the lowest, which makes a lot of sense.
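To make the idea concrete, here is a minimal sketch (illustrative pseudologic, not SonarQube's Quality Gate API or its default thresholds) of a gate that only looks at issues introduced since the leak period started:

// Illustrative sketch only; not SonarQube's API, data model or default criteria.
// The gate ignores legacy debt and evaluates only issues created in the leak period.
function passesLeakPeriodGate(issues, leakPeriodStart) {
  const newIssues = issues.filter(i => i.createdAt >= leakPeriodStart);
  const newBlockers = newIssues.filter(i => i.severity === "BLOCKER").length;
  // Example criterion: no new blocker issues since the last release.
  return newBlockers === 0;
}

// Hypothetical data: the first issue is old debt, the second is a fresh leak.
const issues = [
  { severity: "BLOCKER", createdAt: "2015-01-10" },  // ignored by the gate
  { severity: "BLOCKER", createdAt: "2015-06-20" },  // fails the gate
];
passesLeakPeriodGate(issues, "2015-06-01");  // => false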

I am sure you are wondering: and then what? Then nothing! Because of the nature of software and the fact that we keep making changes to it (SonarSource customers generally claim that 20% of their code base gets changed each year), the debt will naturally be reduced. And where it isn’t is where it does not need to be.

Categories: Open Source

NDC talk on SOLID in slices not layers video online

Jimmy Bogard - Thu, 07/02/2015 - 20:21

The talk I gave at NDC Oslo 2015 is up on SOLID architecture in slices not layers:

https://vimeo.com/131633177

In it, I talk about flipping this style of architecture:

To one that focuses on vertical deliverable features:

Enjoy!

Post Footer automatically generated by Add Post Footer Plugin for wordpress.

Categories: Blogs

Top Five Highest-Rated Software Testing Tools at uTest

uTest - Thu, 07/02/2015 - 17:35

Software testing tools will never replace the job of a tester and don’t have the heart and creativity that testers bring to the table when conducting exploratory testing. That being said, without a tried and ‘tested’ toolkit at their disposal, testers would arguably be ill-equipped to get their jobs done as effectively as they do. […]

The post Top Five Highest-Rated Software Testing Tools at uTest appeared first on Software Testing Blog.

Categories: Companies

The State of Testing Report for 2015 is out!

PractiTest - Thu, 07/02/2015 - 12:32

It has been a little time in the making!

It definitely took more than we expected it to take originally, but what’s important is that this year’s State of Testing Report for 2015 is finally out and ready to be downloaded and shared freely :-)

I don’t want to spoil all the fun of going over the results and the interesting things you can learn from them, or from comparing this year’s results with the ones from our previous State of Testing Survey for 2013.

Still, in a nutshell, I believe that we can see how our industry is progressing and pushing all of us to become more professional testers, and how a large part of our testing teams are adapting as our organizations turn into more agile and leaner working environments.

I wanted to thank my “partner in crime” for this project, Lalitkumar Bhamare from Teatime with Testers for all the hard work he put into this process.

I also wanted to thank our survey review panel: Leah Stockley, Michael Larsen, Keith Klein, Jerry Weinberg and Trish Khoo; for all their comments and their help.

And finally many thanks also to all our collaborators (that you can see in the download page) that helped us spread the word and make this survey and report reach more and more testers each year!

We’ll be happy to hear your feedback.

Categories: Companies

2015 State of Medical Device Development Survey: June Winners

The Seapine View - Wed, 07/01/2015 - 20:31

The responses are pouring in for the 2015 State of Medical Device Development Survey. We’ve already heard from over 140 medical device development professionals and are seeing some interesting trends.

We’ve also drawn the first two random gift card winners. Congratulations, Michael F. and Stefan B.—we’ll be emailing you each a $25 Amazon gift card soon!

One question asked is “What are the three most difficult areas to balance in your product development process?”

2015 Medical Device Development Survey

Early responders identified “delivery expectations” as the most difficult area to balance within a product development process, with “process and procedures” a close second.

What do you think? Take 10 minutes and share your insights now. We’ll draw another two Amazon gift card winners at the end of July!

The post 2015 State of Medical Device Development Survey: June Winners appeared first on Blog.

Categories: Companies

Announcing Our Winning Testers of the Quarter for Q2 2015!

uTest - Wed, 07/01/2015 - 18:37

First of all, I would like to thank everyone who participated in the voting for this quarter’s awards. We started this quarterly recognition program to give credit to members of our community who have excelled in their roles, and more importantly, added value to other members through the consistency and quality of their work. As […]

The post Announcing Our Winning Testers of the Quarter for Q2 2015! appeared first on Software Testing Blog.

Categories: Companies

End-to-end Hypermedia: Building a React Client

Jimmy Bogard - Wed, 07/01/2015 - 18:06

In the last post, I walked through what is to me the most interesting part of REST – the client. It’s easy to build a server API, but no API is complete without someone actually using that API. This is where most REST examples fall down for me – they show all sorts of pretty pictures of hypermedia-rich JSON from the server, but no real examples of how to consume that API.

I walked through some jQuery code in the last post, but why stop with jQuery? That’s so 2010. Instead, I want to build around React. React is perfect for hypermedia because of its component-oriented nature. A resource’s representation can be broken down into its components, and React components then matched accordingly. But before we get into the client, I’ll need to modify my sample to consume React.

Installing React

As a shortcut, I’m just going to use ReactJS.Net to build React into my existing MVC app. I install the ReactJS.Net NuGet package, and add a script reference to my downloaded react.js library. Normally, I’d go through the whole Bower/npm path, but this seemed like the simplest path to integrate into my sample.

I’m going to create just a blank JSX file for all my React components for this page, and slim down my Index view to the basics:

<h2>Instructors</h2>
<div id="content"></div>
@section scripts{
    <script src="@Url.Content("~/Scripts/react-0.13.3.js")"></script>
    <script src="@Url.Content("~/Scripts/InstructorInfo.jsx")"></script>
    @{
        var href = Url.Action("Index", "Instructor", new {httproute = ""});
    }
    <script>
        React.render(
            React.createElement(InstructorsInfo, {href: '@href'}),
            document.getElementById("content")
        );
    </script>
}

All of the div placeholders are removed except one, for content. I pull in the React library and my custom React components. The ReactJS.Net package takes my JSX file and transpiles it into Javascript (as well as builds the needed files for in-browser debugging). Finally, I render my base React component, passing in the root URL for kicking off the initial request for instructors, and the DOM element into which to render the React component.

Once I’ve got the basic React library up and running, it’s time to figure out how we would like to componentize our page.

Slicing our Page

If we look at the page we want to create, we need to take this page and create React components from the parts we find. Here’s our page from before:

Looking at this, I see three individual tables populated with collection+json data. I’m thinking I create one overall component composed of three individual items. Inside the table, I can break things up into the table, rows, header, cells and links:

I might need a few more, but this is a good start. Next, we can start building our React components.

React Components

First up is our overall component that contains our three tables of collection+json data. Since I have an understanding of what’s getting returned on the server side, I’m going to make an assumption that I’m building out three tables, and I can navigate links to drill down to more. Additionally, this component will be responsible for making the initial AJAX call and keeping the overall state. State is important in React, and I’ve decided to keep the parent component responsible for the resource state rather than each table. My InstructorsInfo component is:

class InstructorsInfo extends React.Component {
  constructor(props) {
    super(props);
    this.state = {
      instructors: { },
      courses: { },
      students: { }
    };
    this._handleSelect = this._handleSelect.bind(this);
  }
  componentDidMount() {
    $.getJSON(this.props.href)
      .done(data => this.setState({ instructors: data }));
  }
  _handleSelect(e) {
    $.getJSON(e.href)
      .done(data => {
        var state = e.rel === "courses"
          ? { students: {}}
          : {};

        state[e.rel] = data;

        this.setState(state);
      });
  }
  render() {
    return (
      <div>
        <CollectionJsonTable data={this.state.instructors}
          onSelect={this._handleSelect} />
        <CollectionJsonTable data={this.state.courses}
          onSelect={this._handleSelect} />
        <CollectionJsonTable data={this.state.students}
          onSelect={this._handleSelect} />
      </div>
    )
  }
}

I’m using ES6 here, which makes building React components a bit nicer to work with. I first declare my React component, extending from React.Component. Next, in my constructor, I set up the initial state, an object with empty values for the instructors/courses/students state. Finally, I set up the binding for a callback function to bind to the React component as opposed to the function itself.

In the componentDidMount function, I perform the initial AJAX call and set the instructors collection state based on the data that gets back. The URL I use to make the initial call is based on the “href” of my component’s properties.

The _handleSelect function is the callback for a clicked link way down on one of the tables. I wanted to have the parent component manage fetching new collections instead of a child component figuring out what to do. That method makes the AJAX call based on the “href” passed in from the collection+json data, gets the data back and updates the relevant state based on the “rel” of the link. To make things easy, I matched up the state’s property names to the rel’s I knew about.
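For reference, the collection+json shape these components read from looks roughly like this (the instructor data and URLs below are made up for illustration; only the structure matters):

// Illustrative payload; names, dates and URLs are invented for this example.
const instructorsSample = {
  collection: {
    version: "1.0",
    href: "/api/instructors",
    items: [
      {
        href: "/api/instructors/1",
        data: [
          { name: "name", value: "Kim Abercrombie", prompt: "Name" },      // prompt -> header cell, value -> data cell
          { name: "hireDate", value: "1995-03-11", prompt: "Hire Date" }
        ],
        links: [
          { rel: "courses", href: "/api/instructors/1/courses", prompt: "Courses" }  // rel matches a state property
        ]
      }
    ]
  }
};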

Finally, the render function just has a div with my three CollectionJsonTable components, binding up the data and select functions. Let’s look at that component next:

class CollectionJsonTable extends React.Component {
  render() {
    if (!this.props.data.collection) {
      return <div></div>;
    }
    if (!this.props.data.collection.items.length){
      return <p>No items found.</p>;
    }

    var containsLinks = _(this.props.data.collection.items)
      .some(item => item.links && item.links.length);

    var rows = _(this.props.data.collection.items)
      .map((item, idx) => <CollectionJsonTableRow
        item={item}
        containsLinks={containsLinks}
        onSelect={this.props.onSelect}
        key={idx}
        />)
      .value();

    return (
      <table className="table">
        <CollectionJsonTableHeader
          data={this.props.data.collection.items}
          containsLinks={containsLinks} />
        <tbody>
          {rows}
        </tbody>
      </table>
    );
  }
}

This one is not quite as interesting. It only has the render method, and the first part is just to manage either no data or empty data. Since my data can conditionally have links, I found it easier to inform child components whether or not links exist (through the lodash code), rather than every component having to re-figure this out.

To build up each row, I map the collection+json items to CollectionJsonTableRow components, setting up the necessary props (the item, containsLinks, onSelect and key items). In React, there’s no event aggregator so I have to pass down a callback function to the lowest component via properties all the way down. Finally, since I’m building a collection of components, it’s best practice to put some sort of key on these items so that React knows how to re-render correctly.

The final rendered component is a table with a CollectionJsonTableHeader and the rows. Let’s look at that header next:

class CollectionJsonTableHeader extends React.Component {
  render() {
    var headerCells = _(this.props.data[0].data)
      .map((datum, idx) => <th key={idx}>{datum.prompt}</th>)
      .value();

    if (this.props.containsLinks) {
      headerCells.push(<th key="links"></th>);
    }

    return (
      <thead>
        <tr>
          {headerCells}
        </tr>
      </thead>
    );
  }
}

This component also only has a render method. I map the data items from the first item in the collection, producing header cells based on the prompt from the collection+json data. If the collection contains links, I’ll add an empty header cell on the end. Finally, I render the header with the header cells in a row.

With the header done, I can circle back to the CollectionJsonTableRow:

class CollectionJsonTableRow extends React.Component {
  render() {
    var dataCells = _(this.props.item.data)
      .map((datum, idx) => <td key={idx}>{datum.value}</td>)
      .value();

    if (this.props.containsLinks) {
      dataCells.push(<CollectionJsonTableLinkCell
        key="links"
        links={this.props.item.links}
        onSelect={this.props.onSelect} />);
    }

    return (
      <tr>
        {dataCells}
      </tr>
    );
  }
}

The row’s responsibility is just to build up the collection of cells, plus the optional CollectionJsonTableLinkCell. As before, I have to pass down the callback for the link clicks. Similar to the header cells, I fill in the data value (instead of the prompt). Next up is our link cell:

class CollectionJsonTableLinkCell extends React.Component {
  render() {
    var links = _(this.props.links)
      .map((link, idx) => <CollectionJsonTableLink
        key={idx}
        link={link}
        onSelect={this.props.onSelect} />)
      .value();

    return (
      <td>{links}</td>
    );
  }
}

This one isn’t so interesting, it just loops through the links, building out a CollectionJsonTableLink component, filling in the link object, key, and callback. Finally, our CollectionJsonTableLink component:

class CollectionJsonTableLink extends React.Component {
  constructor(props) {
    super(props);
    this._handleClick = this._handleClick.bind(this);
  }
  _handleClick(e) {
    e.preventDefault();
    this.props.onSelect({
      href : this.props.link.href,
      rel: this.props.link.rel}
    );
  }
  render() {
    return (
      <a href='#' rel={this.props.link.rel} onClick={this._handleClick}>
        {this.props.link.prompt}
      </a>
    );
  }
}
CollectionJsonTableLink.propTypes = {
  onSelect: React.PropTypes.func.isRequired
};

The link clicks are the most interesting part here. I didn’t want my link itself to have the behavior of what to do on click, so I call my “onSelect” prop in the click event from my link. The _handleClick method calls the onSelect method, passing in the href/rel from the collection+json link object. In my render method, I just output a normal anchor tag, with the rel and prompt from the link object, and the onClick event bound to the _handleClick method. Finally, I indicate that the onSelect prop is required, so that I don’t have to check for its existence when the link is clicked.

With all these components, I’ve got a working example:

I found working with hypermedia and React to be a far nicer experience than just raw jQuery. I could reason about individual components at the same level as the hypermedia controls, matching what I was building much more effectively to the resource representation returned. I still have to have some sort of knowledge of how I’m going to navigate the links and what to do, but that logic is all encapsulated in my topmost component.

The sub-components aren’t tied to my overall logic and can be re-used as much as I want across my application, allowing me to use collection+json extensively and not worry about having to parse the result again and again. I’ve got a component that can effectively render a nice table based on a collection+json representation.

Next, we’ll kick things up a notch and build out a React.Native implementation, pushing the limit of hypermedia with a dynamic native mobile client.

Post Footer automatically generated by Add Post Footer Plugin for wordpress.

Categories: Blogs

It worked on my machine – Communication Trap

Sauce Labs - Wed, 07/01/2015 - 18:00

“I don’t see it on my machine!” said every developer ever. Every QA professional I have talked to in my career has heard this at least once.

But why?

Have we asked what’s in a bug?

The answer can either be what gets your team on the road to efficiency, or it can become a kink in the delivery hose. Let’s discuss how your QA can help the team deliver faster by providing a consistent language to keep everyone on target.

Don’t Let The Bad Bugs Bite…

Over the last decade, I have seen issues that have almost no noted content in them (seriously, some have just declared something to the tune of “This feature is… not working”). Then there are tickets that are the gold standard, that have all the information you could possibly want (and probably some with more than you need that turn out to be a few bugs in themselves).

But what happens when you don’t have a common way to report a ticket, and why is it important?

I just came across an issue recently that seemed to have some steps to reproduce, but the setup was not included. Try as I might, I could not replicate the bug. The only way that I could come close to the reported result did not match the steps provided, and I could only guess that the setup I created was what the reporter had done. I will let you guess how long this issue took. Hint: It wasn’t a few hours.

Or perhaps you have an offshore team. I’ve seen many, many instances where someone reports a bug that just doesn’t have enough information in it. If the engineer cannot figure out exactly what the issue is, and has to place it on hold and send it back to the reporter, the engineer waits another night while the person on the other side of the world hopefully notices the ticket is back in his or her queue for more details. That is another full day that the bug exists, delaying when the root cause can be identified and the issue fixed.

Depending on the makeup of your team, and whether you are in an automated or manual setup, you need to consider how the issue will be verified. The person testing the fix (or writing the automated test to ensure the issue does not occur again) may not be the one who reported it. (Again, more time is spent figuring out how to test if the fix is correct.)

The bottom line? The back and forth that occurs from a poorly reported bug is costly in terms of time and resources.

Cut The Chit Chat

Having a uniform language/template will help reduce uncertainty across the board, and reduce the time a bug spends unresolved. But what should be included in a bug report to cut out this back and forth, and keep the team on track? There are several other things you may want to consider adding, but these are some of the top things I like to see from a tester (a filled-in example follows the list):

  • Summary/Title: This should be succinct yet descriptive. I almost try to make these sound like a user story: <user> <can/cannot> <do x action> in <y feature>. When I sit in a triage meeting, can I tell what the issue is just by reading the summary?
  • Environment: every now and then we come across bugs that are very specific to the OS, database type, browser, etc.  Without listing this information, it’s all too easy to say ‘Can’t reproduce’, only to have a client find it in the field.
  • Build: Hopefully you are testing on the latest build, but if for some reason you have servers that are updated at different rates than others, you need to pinpoint when exactly the bug was found.
  • Devices: if you’re doing any type of mobile testing, what type of device were you using? What version? If you found a bug on the web app, do you see it on the mobile app too? Which one? Android or iOS?
  • Priority: The priorities are all relatively standard across the field — Critical, High, Medium and Low. Have criteria defined up front so everyone is on the same page as to what constitutes each selection.
  • Steps to reproduce: Not just ‘When I did this, it broke.’  Really break it down, from login and data setup to every click you make.
  • Expected Result vs. Actual Result: What were you expecting, and why?  What happened instead?
  • Requirements and Wireframes: This helps to point to why testing occurred, and why someone wrote up a bug and linked it back to the originating artifact, though hopefully you are on the same page upfront, before development begins. Sometimes things slip through and perhaps an engineer has a different understanding of a feature than the tester. Being able to point back to why you think an element is a bug is helpful, and gets you all on the same page.
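To illustrate (the product, build number and steps below are entirely made up), a report following that template might read:

Summary: Registered user cannot save a draft message in the Compose feature
Environment: Windows 8.1, Chrome 43, staging database (SQL Server 2012)
Build: 2.4.1-staging, deployed 2015-06-29
Device: Web app on desktop; not reproduced on the Android 5.1 native app
Priority: High
Steps to reproduce: Log in as a registered user, open Compose, enter a subject and body, click "Save draft"
Expected result: The draft is saved and appears in the Drafts folder
Actual result: A "Something went wrong" error is shown and the draft is lost
Requirements/Wireframes: Linked back to the "Save drafts" user story and wireframe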

Of course, there are people other than your traditional testers writing bugs, and it is essential to use your QA to drive conformity. Perhaps your UX team is performing audits, or you have bug bashes where people from other departments are invited to test the system and find bugs, or you have someone new to the team that simply needs training. Having a template will ensure clarity and reduce inefficiencies, regardless of who enters the ticket.

Utilize QA to promote consistency, get bugs out of purgatory, and drive faster delivery.

Ashley Hunsberger is a Quality Architect at Blackboard, Inc. and co-founder of Quality Element. She’s passionate about making an impact in education and loves coaching team members in product and client-focused quality practices.  Most recently, she has focused on test strategy implementation and training, development process efficiencies, and preaching Test Driven Development to anyone that will listen.  In her downtime, she loves to travel, read, quilt, hike, and spend time with her family.

T: @aahunsberger
L: https://www.linkedin.com/in/ashleyhunsberger

Categories: Companies

Code Dx Code Analysis Tool on Bitnami

Software Testing Magazine - Wed, 07/01/2015 - 17:40
Code Dx, Inc., a provider of a robust suite of fast and affordable tools that help software developers and security analysts find, prioritize and visualize software vulnerabilities, today announced the availability of its code analysis tool on Bitnami, a marketplace that makes it simple to find popular server applications and development environments and deploy them in just a few clicks. Code Dx is now standardizing deployment with Bitnami installation technology for Linux, Windows and Mac OS X. After one quick installation, users are able to automatically configure and run Code Dx ...
Categories: Communities

Keynote Adds Appium to Support Mobile Testing

Software Testing Magazine - Wed, 07/01/2015 - 17:29
Keynote has announced the integration of Keynote Mobile Testing with Appium, an open source framework designed to help automate the testing of native and web iOS and Android mobile applications. The new integration will allow organizations to improve the quality of apps by enabling efficient collaboration between quality assurance (QA) practitioners doing end-to-end testing and developers automating unit tests. By integrating Keynote Mobile Testing with Appium, developers and QA professionals can pair a high-fidelity interactive mobile testing environment with a common automated scripting test framework and run tests across real devices ...
Categories: Communities

NeoSense 1.1 Launched

Software Testing Magazine - Wed, 07/01/2015 - 17:04
Neotys has announced NeoSense 1.1, an enhanced version of its synthetic monitoring solution for application performance and availability. The release of NeoSense 1.1 adds powerful new capabilities for web and mobile applications to quickly create realistic monitoring profiles even for complex business apps using the latest technologies. The solution enables you to generate synthetic users in production and actively monitor the performance and availability of critical business transactions within recorded user paths to detect and automatically alert you of any issues before they become problems for real users. NeoSense 1.1 Key Enhancements Greater ...
Categories: Communities

Should Every Tester Learn To Program?

Gurock Software Blog - Wed, 07/01/2015 - 16:51

This is a guest posting by Simon Knight. Simon Knight works with teams of all shapes and sizes as a test lead, manager & facilitator, helping to deliver great software by building quality into every stage of the development process.


Sometimes, I like to think of my teammates as a kind of band of adventuring heroes. Usually, I don’t mention this to them, in case they start looking at me funny. But when I gaze across the room at the product owner, if I squint and use my imagination a bit, I can see a bit of Elf in her. Probably she’s handy with a bow and arrow. If she was a World of Warcraft character, I reckon she’d be a Hunter class; great at long range thinking and decision making.

Sitting next to me I have a Warrior class programmer. Master of the weapons of his trade. Capable of slaying a tricky line of code with a single blow. Myself I think of more as a kind of Shaman. I get to call on elemental powers, unleashing the primordial forces of skilled testing to support the team and help them deliver a great product.

Mostly I’m happy enough doing my thing. I’ve levelled up quite a bit. My spells have power and the team appreciate them. Every so often though, I’ll watch my warrior buddy slicing and dicing code and think, it would be great to carve a few lines myself. I’m not going to be a full time code warrior, so I don’t need a broadsword or a battle-axe, or anything heavyweight. My weapon of choice probably wouldn’t be an Oathkeeper. I’d prefer something with a bit more Sting.

Context Is King

There are some solid arguments for learning a bit of programming swordplay. Being able to slice beneath the surface of the application code means you can better understand what’s going wrong, and why. Your bug reports will be better informed, and the additional information you provide to developers when raising defects means shorter feedback loops.

Of course the software you’re working with comes in many shapes and sizes. And you don’t have to interact with it in the way the developers intended. If you’re working on a web application for example, operating the software while keeping a browser dev tools window open will provide you with a view of the client-side code and resources. Running your software through a proxy can help you to see what requests are being sent and received. Being alert to application behaviour by way of a logging or monitoring tool can help you see even further down the stack.

The nature of the software your team is working on is ultimately going to determine what tools (or weapons) will serve you best. Testing a web site is going to be very different to testing a native application, or a piece of embedded software. There’s a common thread though. Whatever kind of software you’re testing, the engine beneath it, the thing that makes it all hang together, is code. The software behaves the way it does because that’s the way the developers have programmed it to.

So given that, as a software tester, your job is to test software that’s driven by code, it makes sense to learn how code works, right? Well, you’d think. But it turns out, there’s some debate about this very subject. Not everyone agrees that testers should learn to write code.

“There’s some debate about this very subject. Not everyone agrees that testers should learn to write code.” – Simon Knight Tweet this quote


What’s All The Controversy?

If you follow some of the testing voices by way of their blogs and other social media, you may have come across arguments against learning to code like these:

  • Being able to code will make you think more like a computer and less like a person.
  • If you know how to code you’ll spend more time doing that and less time testing.
  • Learning to code comes at the expense of learning other, equally or more important skills.
  • Developers are better at programming anyway. Testers should focus on testing and let the developers write the code.

You’ll also see arguments in favour of learning to code, like these:

  • You’ll be better able to speak the language of your developers.
  • You’ll have a better understanding of the complexities and accompanying risks of development.
  • You’ll empathise with your teammates better when you understand the coding problems they face on a daily basis.

With all of this controversy around the subject, you could be forgiven for wanting to sit on the fence. But there’s more:

Anyone who is serious about a career in testing would do well to pick up at least one programming language. – Elisabeth Hendrickson

Back in 2010 Elisabeth Hendrickson carried out some market research and observed that 80% of the advertised testing roles they looked at seemed to require some kind of programming experience.

More recently, Rob Lambert, speaking as a hiring manager, notes that although there might once have been an argument for less technical testers who focused more on the big-picture business scenarios, there are now plenty of testers who can write compelling test scenarios and develop the code to execute them as checks too.

It’s no longer enough to be a tester who doesn’t code, because when you apply for a job you may be up against a tester similar to you who can code. – Rob Lambert

If we take those last two points as arguably the most compelling reasons for learning a programming language or two, you’d think that life would become a bit clearer. You just need to make a decision about what programming language to learn and get on with it, right?

Wrong. Turns out, even that’s not straightforward.

What Does Learning To Code Mean Anyway?

[Photo: config]

Figuring out where to start learning anything can be tough. There are so many choices! Should you go to college? Do an online course? Read some blog posts? Buy a book?

Learning a programming language can be particularly difficult. Before you start studying, how do you even know which language to choose? Figuring out answers to the questions below may help to narrow down your options:

What kind of work are you doing?
The software you test and the platforms it needs to work on may have a bearing on the kind of programming language it will be useful for you to know. If you’re working mostly on the client side of a web application, learning some JavaScript might be more useful than learning some Java.

What does your code need to do?
Having a specific purpose or task that your code will be used for, in addition to being a great motivational tool for learning it in the first place, can be used to steer your decision about which language to actually learn. If you just need to create some data, a scripting language or some variety of SQL may be sufficient. If you need to develop a tool that’s intended to be a bit longer term, something more heavyweight like C# or Java may be required.
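As a rough illustration of the "just need to create some data" case, here is the kind of small, throwaway script a tester might write in Python. The file name and fields are invented for the example.

```python
# An illustrative throwaway script that generates a CSV of test users.
# The file name and column names are made up for the example.
import csv
import random
import string

def random_email() -> str:
    """Build a plausible-looking, randomised email address."""
    name = "".join(random.choices(string.ascii_lowercase, k=8))
    return f"{name}@example.com"

with open("test_users.csv", "w", newline="") as handle:
    writer = csv.writer(handle)
    writer.writerow(["id", "email", "active"])
    for user_id in range(1, 101):
        writer.writerow([user_id, random_email(), random.choice([True, False])])
```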

Where will your code be run?
Does your code need to work in a browser? On the server? On a desktop? In Windows, OS X and Linux? On a mobile device? The platform on which your code needs to be developed and run (or run against) should be a consideration.

What is everyone else using?
If you’re working on a team or project and everyone else is using C#, developing your scripts and tools in Ruby may not be the best idea. In addition to not wanting to upset folk, you want to take advantage of all of the experience and knowledge that’s around you, right? Find out what the preferred language of your colleagues is, and why. Then ask them for their advice and support in getting started with that language. Most often, they’ll be happy to provide it.

What skills is the market looking for?
Scanning job advertisements will provide a good indication of what skills are hot in the marketplace right now (as Elisabeth demonstrated back in 2010). You need to balance the ebb and flow of fashionable skills with longer term trends though.

Learning to Program

[Photo: book]

If you’ve chosen a programming language to get started with, you’ve cleared the first hurdle. It’s time to start developing those skills! The strategies below will help you on your way.

Talk to someone else who writes code
You’re probably already in one of the best places to learn programming; amongst developers! Go out of your way to talk to them and get them sharing their knowledge. Ask them to show you how their code works. Even better – sit with them as it’s being done. This way, you can add value by sharing your testing ideas while the code is still being written.

Look for examples of the same code in lots of languages
If you decide to read a book or some blog posts that provide typical examples, don’t just read code for the language you’ve decided to learn. Look at example code for other languages as well. Try to understand both the differences and the similarities so you can start to understand underlying patterns and principles.

Write some of your own code
Writing your own code will add depth to your understanding that simply can’t be achieved just by reading somebody else’s. And programming isn’t just about writing the code in any event. You have to set up your environment, familiarise yourself with the tools and carry out various other tasks along the way. There’s really no substitute for learning by actually doing the work.

Make changes and test them
Once you’ve worked through some examples, and hopefully got them running, start to make some changes. Apply your exploratory testing skills. Formulate a hypothesis, make a small adjustment, then observe the result. Look for ways to improve and optimise your code.
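A toy sketch of that hypothesise-adjust-observe loop might look like the following in Python; the discount() function is invented purely for illustration.

```python
# A toy example of the "formulate a hypothesis, make a change, observe" loop.
# discount() is an invented function, used only to illustrate the approach.
def discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    return price - (price * percent / 100)

# Hypothesis: a 100% discount should bring the price down to zero.
assert discount(50.0, 100) == 0.0

# Adjustment: probe the boundaries, observe the results, then decide whether
# the behaviour matches what you (and the requirements) expect.
print(discount(50.0, 0))    # expect 50.0
print(discount(50.0, 150))  # a negative price - bug, or acceptable?
```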

Learn to understand the compiler and debug your code
While you make changes you’ll probably experience some compilation or execution failures. The console will report an error of some kind, but do you understand what the error message means? Learning to search for reports of the same problem will be invaluable at this stage of your learning curve. Consider adding some logging to your code so that problems can be traced and pinpointed more accurately.
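In Python, for instance, a few lines of the standard logging module are often enough to make a failure explain itself. The parse_age() function below is a made-up example.

```python
# A sketch of adding logging so a failure can be traced rather than guessed at.
# parse_age() is a made-up function for the purposes of the example.
import logging

logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s %(levelname)s %(message)s",
)
log = logging.getLogger(__name__)

def parse_age(raw: str) -> int:
    log.debug("parse_age called with %r", raw)
    try:
        return int(raw)
    except ValueError:
        # Records the offending input alongside the full stack trace.
        log.exception("Could not parse %r as an integer", raw)
        raise

parse_age("42")
parse_age("forty-two")  # fails, but the log now shows exactly why
```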

Look for things to do with your code
As your confidence grows and useful examples to learn from start to thin out, it’s time to start looking for ways to implement your learning at work. Mechanical activities that have to be repeated often are great candidates for code or scripted execution. But what about smaller, more ad-hoc tasks like data creation, extraction or manipulation? Try to think about ways you could carry out day-to-day tasks by writing simple scripts.
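As one example of such an ad-hoc task, a log-file triage job could be handled with a short Python script like the one below. The file name and log format are assumptions made for the sketch.

```python
# An illustrative ad-hoc script: pull the ERROR lines out of a log file and
# count how often each message appears. The file name and format are assumed.
from collections import Counter

errors = Counter()
with open("application.log", encoding="utf-8") as handle:
    for line in handle:
        if " ERROR " in line:
            # Keep everything after the level marker as the message text.
            message = line.split(" ERROR ", 1)[1].strip()
            errors[message] += 1

# Print the ten most frequent error messages with their counts.
for message, count in errors.most_common(10):
    print(f"{count:5d}  {message}")
```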

Store, share and re-use your code
As your portfolio of scripts and code grows, you’ll need somewhere to keep it all so you can refer to and re-use it. If you haven’t already done so, investigate some version control tools and code repositories. Git and GitHub are very popular (though other options are of course available) and GitHub makes it very easy to store and share your code with others.

Read someone else’s code
If you’re already working on a software project, why not download the source code and read through it? Many developers try to follow a test-driven approach to development, so unit tests are a great place to start. Some development tools will also let you step through the code as it’s being executed, which is a great way to see how it works in action.
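If you’re not sure what you’re looking for, a typical unit test is usually small and self-describing. The sketch below uses Python’s built-in unittest module with an invented slugify() function, just to show the shape such tests tend to take.

```python
# A minimal example of the shape a unit test usually takes, using Python's
# built-in unittest module. slugify() is invented for the example.
import unittest

def slugify(title: str) -> str:
    """Turn a page title into a URL-friendly slug."""
    return "-".join(title.lower().split())

class SlugifyTests(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_already_lowercase_is_unchanged(self):
        self.assertEqual(slugify("testing"), "testing")

if __name__ == "__main__":
    unittest.main()
```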

Work on code somebody else wrote
Open source projects are a good way to start putting your newfound skills to the test. Some of the software you use on a day-to-day basis is probably open source. Next time you use a tool, think about ways in which it might be improved. Join the mailing list for the development group or search the web for features under development and bugs that have been logged. Try to fix a problem in the source code or find some other way to contribute.

Practice, practice, practice!
If you want to get really good then you’ve got to keep putting in the work. It may not take ten thousand hours, but you should certainly expect to put in a significant amount of effort to become anything near competent. And even then, practice isn’t necessarily the same thing as experience. Professional developers solve all kinds of engineering problems on a daily basis. If you’re serious about learning to program, try to get some experience with production code. In these days of cross-functional teams, it shouldn’t be too hard.

Is Learning to Program Really Worth All The Effort?

[Photo: github]

There are many routes you could take towards becoming a highly skilled tester who is able to add significant value to the projects and teams on which you work. Levelling up your coding swordplay is just one of them, and it’s one that will require significant time and effort.

So is it really worth it? Some people say that just learning to read code is enough.

“I advocate learning to read code over coding. Coding well can take a significant investment in time and practice.” – Alan Parkinson

Others argue that learning about programming (James Bach) or learning about IT (Patrick Prill) may be more valuable. These are all perfectly valid arguments, but learning about programming and IT is analogous to learning about swordplay and hand-to-hand combat.

Learning to read code would be like learning to read and predict an opponent’s movements in a fight. Useful skills, to be sure. But not quite the same thing as being able to fight back.

The way that you learn how to handle a sword is by picking one up. Feeling the weight. Learning to swing, thrust and parry. Sparring against a real, live opponent once you’ve learned some moves. Learning to program is the same.

Reading a book or watching a few training videos really won’t cut it. You need to sit at the keyboard and practice your moves, over and over. As you do so, your programming skills will start to emerge, and your understanding of what professional development looks like below the surface will grow.

This guide should get you started, but if you have some strategies that you’ve found particularly helpful in levelling up your programming skills, we’d love to hear about them in the comments section below.

PS: Have you found this article useful? We will have more relevant testing and QA-related articles soon on topics like building a great testing team, improving your testing career or levelling up your testing skills. Make sure to subscribe below via email and follow us on Twitter!

Photo credit: book and sign photos by Francois Schnell: here and here.
Categories: Companies

GTAC 2015: Call for Proposals & Attendance

Google Testing Blog - Tue, 06/30/2015 - 23:11
Posted by Anthony Vallone on behalf of the GTAC Committee

The GTAC (Google Test Automation Conference) 2015 application process is now open for presentation proposals and attendance. GTAC will be held at the Google Cambridge office (near Boston, Massachusetts, USA) on November 10th - 11th, 2015.

GTAC will be streamed live on YouTube again this year, so even if you can’t attend in person, you’ll be able to watch the conference remotely. We will post the live stream information as we get closer to the event, and recordings will be posted afterward.

Speakers
Presentations are targeted at student, academic, and experienced engineers working on test automation. Full presentations are 30 minutes and lightning talks are 10 minutes. Speakers should be prepared for a question and answer session following their presentation.

Application
For presentation proposals and/or attendance, complete this form. We will be selecting about 25 talks and 200 attendees for the event. The selection process is not first come, first served (no need to rush your application), and we select a diverse group of engineers from various locations, company sizes, and technical backgrounds (academic, industry expert, junior engineer, etc).

Deadline
The due date for both presentation and attendance applications is August 10th, 2015.

Fees
There are no registration fees, but speakers and attendees must arrange and pay for their own travel and accommodations.

More information
You can find more details at developers.google.com/gtac.

Categories: Blogs

Anarchy In Arkham; Multi-Platform Testing Is a Bare Minimum For Gaming

uTest - Tue, 06/30/2015 - 21:36

One of the endearing traits of the video game sector is that when it identifies an ongoing revenue stream, it makes sure that it milks that cash cow to death. Irrespective of the genre - sports, fantasy, action, shooters, massively-multiplayer-online-first-person-shooter, fantasy-fighting, role-playing, strategy, to name just a few - developers and publishers now want to […]

The post Anarchy In Arkham; Multi-Platform Testing Is a Bare Minimum For Gaming appeared first on Software Testing Blog.

Categories: Companies

The Rules I Live By

Testlio - Community of testers - Tue, 06/30/2015 - 17:01

 

  1. Sleep enough.

Make sure you are getting the rest you need. So you can work hard at the things that really matter, with the people that really matter. That source of energy allows you to not only welcome a challenge but overcome it.

 

  2. Tell the ones you love that you love them. Over and over and over again.

There isn’t a person in the world that doesn’t enjoy hearing the words “I love you” or “You matter to me”. When someone matters so much, let them know. You never know when it will be too late to tell someone what they mean to you. Why not do it now, and tomorrow, and the next day.

 

  3. Show appreciation everywhere you can. Even the seemingly small places.

Gratitude. Be grateful for the things you have. Count your blessings. When a person can truly appreciate the things they already have, they open themselves up to all other incredible experiences life has to offer. Express appreciation and gratitude with the slightest touch, a smile to a stranger, a kind word, an honest compliment, or a tiny act of caring.

 

  4. Eat healthy, but not too healthy.

Take care of your body, it’s the only one you get. But allowing yourself to indulge every once in a while is also a requirement. You only get one body but you also only have one life.

 

  5. Cookies are good for the soul.

There’s nothing more that needs to be said with this one. This rule is undebatable.

 

  6. Be true to you.

Authenticity is a gift. Be who you are at your core. Always, no matter what, especially when it’s not the easy thing to do. Stand up for the ideas and people that matter. It can be extremely difficult to take a stance on something you believe when it seems everyone else is going in the opposite direction. But these moments allow your true character to shine through. These moments are your chance to prove what kind of person you are and what kind of person you want to be. Be someone that inspires others.

 

  7. Laugh

Laugh as often as possible, too often, and so hard that you throw your head back and almost pee in your pants.

 

  8. Listen

Listen with full attention. Show that you care. Show that you want to connect. Listen, not only for your turn to speak, but to understand.

 

  9. Learn

Exercise your mind in every way imaginable. Explore it. Challenge it.

Engage in a constant quest for knowledge and truth. Keep your mind active, curious, and hungry.

 

  10. Lead with Compassion

Be a sense of comfort for as many people as you can in your lifetime. Everyone is going through something.

 

Connect with Michelle here

The post The Rules I Live By appeared first on Testlio.

Categories: Companies

Fighting Technical Debt: Memory Leak Detection in Production

Thanks to our friends from Prep Sportswear who let me share their memory leak detection story with you. It is a story about “fighting technical debt” in software that matured over the years, with the initial developers no longer on board to optimize or fix their code mistakes. Check out their online store and browse through their pages […]

The post Fighting Technical Debt: Memory Leak Detection in Production appeared first on Dynatrace APM Blog.

Categories: Companies

Free Web Load Testing Services

Software Testing Magazine - Tue, 06/30/2015 - 09:00
The software development trend that shifts the target platform from the desktop to web, cloud and mobile applications has fostered the development of load testing services on the web. It is an obvious option to use web-based load testing tools for applications that can be accessed by web users. This article presents the free offers from commercial web load testing service providers. We have considered in this article only the tools that provide a load testing service, which we define as the ability to simulate the access by multiple users on ...
Categories: Communities

As Jenkins Grows Up, We Invite Our Business Partners To Grow With Us.

As I am writing this post, CloudBees reached a milestone in the number of employees. I think the milestone hit many of us by surprise. “Really,” we thought. “So soon?” But if you look back over the past couple of quarters, it’s pretty apparent that our internal growth was inevitable.  
The number of Jenkins deployments is rapidly rising. At last measure, there are more than 100,000 active installations of Jenkins running. And, as enterprise companies deploy more and more Jenkins, the need for enterprise-grade solutions is accelerating at a very similar rate. A recent blog by CloudBees CEO Sacha Labourey discusses how organizations are transforming their use of Jenkins as a Continuous Integration (CI) tool to using it as a platform to bring enterprise-wide Continuous Delivery (CD). And as our customers have matured their deployments, so have the solutions and offerings from CloudBees, including the most recent launch of the CloudBees Jenkins Platform.
The fact is… we are growing. And as we grow, our partners - resellers, service providers, training partners and technology partners - will all play an increasingly critical role in delivering the enterprise-scale Jenkins solutions and complementary tools and platforms our joint customers are seeking.
Which is why we are committed to equipping our partners with the skills, resources and tools to help you get the most from the opportunity that Jenkins offers. Next month, CloudBees will announce new developments in our Partner Program to meet the needs of our growing partner ecosystem and to help all maximize the vast opportunities Jenkins presents. All current or potential partners - including global resellers, service providers and training partners - are invited to attend our informational webinar on July 16 at 11 am ET. This presentation will provide an overview of the latest product developments and expanded opportunities available to partners to help grow your business through enterprise-scale Jenkins solutions.
We look forward to sharing these exciting developments with you next month and working with you to uncover new opportunities, deliver the latest in Jenkins innovations and solutions to our joint customers, and expand your business.

Durga Sammeta, Global Alliances and Channels

Durga is Senior Director of Global Alliances and Channels and is based in San Jose.


Categories: Companies

Knowledge Sharing

SpiraTest is the most powerful and affordable test management solution on the market today