One thing I respect about uTest is their continual pursuit of ways to increase customer value. It’s an essential business objective to ensure the health and growth of our company. ‘Value’ should be the middle name of any good tester. “Lucas Value Dargis.” Sounds pretty cool, huh?
I had just finished my 26th uTest test cycle in mid-2012. I had put an extra amount of focus and effort into this cycle because there was something special at stake. On some occasions, uTest offers an MVT award which is given to the Most Valuable Tester of the cycle. The selection process takes several things into account including the quality of the bugs found, clear documentation, participation, and of course, customer value.
The MVT award not only offers a nice monetary prize, but it’s also a way to establish yourself as a top tester within the uTest Community. I decided I was going to win that MVT award.
As usual, I started by defining my test strategy. I took the selection criteria and the project scope and instructions into account and came out with these five strategic objectives:
- Focus on the customer-defined ‘focus’ area
- Report only high-value bugs
- Report more bugs than anyone else
- Write detailed, easy-to-understand bug reports
- Be active on the project’s chat
When the test cycle was over, I reflected on how well I’d done. I reported nine bugs — more than anyone else in the cycle. Of those, eight were bugs in the customer’s ‘focus’ area. The same eight were also rated as very or extremely valuable. All the bugs were documented beautifully and I was an active participant in the cycle’s chat.
There was no competition. No other tester was even close. I had that MVT award in the bag. I was thinking of all the baseball cards I could buy with the extra Cheddar I’d won. I even called my mom to tell her how awesome her son was! You can only imagine my surprise when the announcement was made that someone else had won the MVT award. Clearly there was some mistake…right? That’s not how you spell my name!
I emailed the project manager asking for an explanation for this miscarriage of justice. The tester who won had fewer bugs, none of them were from the ‘focus’ area and they weren’t documented particularly well. How could that possibly be worth the MVT award? The PM tactfully explained that while I had done well in the cycle, the tester who won had found the two most valuable bugs and the customer deemed them worthy of the MVT award.
I was reminded that my adopted definition of quality is “value to someone who matters” and suddenly it all fell into place. It didn’t matter how valuable I thought my bugs and reports were. It didn’t matter how much thought and effort I put into my strategy and work. At the end of the day, a tester’s goal, his or her mission, should be to provide “someone who matters” with the most value possible. I’m not that “someone who matters.” That “someone” is our customer.
It was a hard pill to swallow, but that lesson had a strong impact on me and it will be something I’ll carry with me moving forward. Congratulations to the MVT. I hope you enjoy all those baseball cards.
A Gold-rated tester and Enterprise Test Team Lead (TTL) at uTest, Lucas Dargis has been an invaluable fixture in the uTest Community for 2 1/2 years, mentoring hundreds of testers and championing them to become better testers. As a software consultant, Lucas has also led the testing efforts of mission-critical and flagship projects for several global companies. You can visit him at his personal blog and website.
To read more, visit our blog at blog.sonatype.com.
Data-driven performance problems are not new, but most of the time they are related to too much data being queried from the database. O/R mappers like Hibernate have been a rich source of problem-pattern blog posts in the past. Last week I got to analyze a new type of data-driven performance problem. It was on […]
The post Data Driven Performance Problems are Not Always Related to Hibernate appeared first on Compuware APM Blog.
Since I am publishing this on my personal blog, this is my personal view, the view of Markus Gärtner as an individual.
I think the first time I came across the ISO 29119 discussion was during the Agile Testing Days 2010, and probably also during Stuart Reid’s keynote at EuroSTAR 2010. Thinking back to that particular keynote, I think he was visibly nervous during his whole talk, eventually delivering nothing worthy of a keynote. Yeah, I am still disappointed by that keynote four years later.
Recently, ISO 29119 started to be heavily debated in one of the communities I am involved in. Since I think others have expressed their thoughts on the matter more eloquently and in greater depth than I am going to, make sure to look beyond my blog for a complete picture of the whole discussion. I am going to share my current state of thoughts here.

Audits
In my past I have been part of a few audits. I think it was ISO 9000 or ISO 9001, I can’t tell, since people keep on confusing the two.
These audits usually had a story before the audit. Usually one or two weeks up-front I was approached by someone asking me whether I could show something during the audit that had something to do with our daily work. I was briefed in terms of what that auditor wanted to see. Usually we also prepared a presentation of some sorts.
Then came the auditing. Usually I sat together with the auditor and a developer in a meeting room, and we showed what we did. Then we answered some questions from the auditor. That was it.
Usually a week later we received some final evaluation. Mostly there were points like “this new development method needs to be described in the tool where you put your processes in.” and so on. It didn’t affect my work.
More interestingly, what we showed usually didn’t have anything to do with the work we did when the auditor left the room. Mostly, we ignored most of the process in the process tool that floated around. At least I wasn’t sure how to read that stuff anyways. And of course, on every project there was someone willing to convince you that diverting from whatever process was described was fruitful in this particular situation and context.
Most interestingly, based upon the auditing process, people made claims about what was in the process description and what the auditor might want to see. No one ever talked to the auditors up-front (the belief was that it probably wasn’t allowed). Oh, and of course, if the thing you audit in order to improve it isn’t the thing you actually do when you’re not being audited, then you’re auditing something bogus. Auditing didn’t prevent us from running into this trap. Remember: if there is an incentive, the target will be hit. Yeah, that sounds like what we did. We hit the auditing target without changing anything real.
Skip forward a few years, and I see the same problems repeated within organizations that adopt CMMI, SPICE, you-name-it. Inherently, the fact that an organization has been standardized seems to lead to betrayal, misinformation, and ignorance when it comes to the processes that are described. To me, this seems to be a pattern among the companies I have seen that adopted a particular standard for their work. (I might be biased.)

Standards
How come, you ask, do we adopt standards to start with? Well, there are a bunch of standards out there. For example, USB is standardized. So were PS/2, VGA, and serial and parallel ports. These standards solve the problem of two different vendors producing two pieces of hardware that need to work together. The standard defines their commonly used interface on a particular system.
This seems to work reasonably well for hardware. Hardware is, well, hard. You can make hard decisions about hardware. Software, on the other hand, is softer. It reacts flexibly, can be configured in various ways, and usually involves a more creative process to begin with. When it comes to interfaces between two different systems, you can document them, but usually a particular interface between software components delivers some sort of competitive advantage for a particular vendor. Still, when working on the .NET platform, you have to adhere to certain standards. The same goes for stuff like JBoss, and whatever programming language you may use. There are things you can work around, and others you can’t.
Soft-skill-ware, i.e. humans, is even more flexible, and will react in sometimes unpredictable ways when challenged in difficult work situations. That said, people tend to diverge from anything formal to add their personal note, to achieve something, and to show their flexibility. With interfaces between humans, as in behavioral models, humans tend to trick the system and make it look like they adhere to the behavior described, when they don’t.

ISO 29119
ISO 29119 tries to combine some of the knowledge that is floating around. Based upon my experiences, I doubt that high-quality work stems from a good process description. In my experience, humans can outperform any mediocre process that is around, and perform dramatically better.
That said, good process descriptions appear to be one indicator of a good process, but I doubt that our field is old enough for us to stop looking for better ways. There certainly are better ways. And we certainly haven’t understood enough about software delivery to come up with any behavioral interfaces for two companies working on the same product.
Indeed, I have seen companies suffer from outsourcing parts of a process, like testing, to another vendor, or offshoring it to other countries and/or timezones. Most of the clients I have been involved with even suffered so much that they insourced the efforts they had previously outsourced. The burden of the additional coordination was simply too high to warrant the results. (Yeah, there are exceptions where this worked. But they appear to be exceptions as of now.)
In fact, I believe that we are currently exploring alternatives to the traditional split between programmers and testers. One of the reasons we started with that split was Cognitive Dissonance. In the belief that only a split between programmers and testers could overcome Cognitive Dissonance, we created a profession of our own a couple of decades ago. Now, with the rise of cross-functional teams in agile software development, we are finding out that that split wasn’t necessary to overcome Cognitive Dissonance. In short, you can keep an independent view if you maintain a professional mind-set, while still helping your team to develop better products.
The question I am asking: will a standard like ISO 29119 keep us from exploring further such alternatives? Should we give up exploring other models of delivering working software to our customers? I don’t think so.

So, what should I do tomorrow?
Over the years, I have made a conscious effort not to put myself into places where standards dominate. Simply put, I put myself in a position where I don’t need to care, and can still help deliver good software. Open source software is such an environment.
Of course, that won’t help you in the long run if the industry gets flooded with standards. ISO 29119 claims it is based upon internationally agreed viewpoints. It also claims that it tries to integrate Agile methods into the older standards it is going to replace. I don’t know which specialists they talked to in the German Agile community. It certainly wasn’t me. So, I doubt much good will come out of this.
And yet, I don’t see this as my battle. A while ago I realized that I probably put too much on my shoulders, and I try to decide which battles to pick. I certainly see the problems of ISO 29119, but it’s not something I want to put active effort into.
Currently I am working on putting myself in a position where I don’t need to care about ISO 29119 at all anymore, whatever comes out of it. However, I think it’s important that the people who want to fight ISO 29119 more actively than me are able to do so. That is why they have my support from afar.
– Markus Gärtner
QA Wizard Pro’s scripting language includes a set of statements you can use to automatically create TestTrack issues to report errors found during script playback. The statements you use depend on the information you want to add to the issue.
You can use the AddIssue statement (named AddDefect in QA Wizard Pro 2014.0 and earlier) to create a brief issue with information only in the Summary, Description, Steps to Reproduce, and Other Hardware and Software fields. This is a simple way to create a new issue, provide some basic information in it, and add it to TestTrack at the same time.
After the issue is added, you can manually edit it in TestTrack or from the Issues pane in QA Wizard Pro to provide additional information.
You can also use advanced statements introduced in QA Wizard Pro 2014.1 to create an empty TestTrack issue object, set and work with specific field values, and then add the issue to the project. These statements (NewIssue, SetFieldValue, GetFieldValue, RemoveField, AddFileAttachment, and AddToTestTrack) allow you to set more issue field values, including custom fields, and add file attachments to create more thorough issues that require less time to edit or review later.
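A rough sketch of how these statements might fit together in a playback script follows. The statement names come from the text above, but the exact syntax, argument order, and the field and file names used here are assumptions for illustration only; consult the QA Wizard Pro help for the real signatures:

```
' Hypothetical sketch - syntax, argument order, and field names are illustrative.
' Quick route: create and submit a basic issue in a single call.
AddIssue("Login fails after timeout", _
         "Clicking Login does nothing once the session has expired.", _
         "1. Log in. 2. Wait for the session timeout. 3. Click Login.", _
         "Windows 8.1, Internet Explorer 11")

' Advanced route (2014.1 and later): build the issue up field by field.
issue = NewIssue()
SetFieldValue(issue, "Summary", "Login fails after timeout")
SetFieldValue(issue, "Severity", "High")              ' custom field, assumed name
AddFileAttachment(issue, "C:\captures\login_bug.png") ' assumed capture path
AddToTestTrack(issue)                                 ' submits the issue to the project
```

The advantage of the second route is that the script can fill in custom fields and attach playback screenshots before submitting, so the issue needs little or no manual editing afterward.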
Check out the QA Wizard Pro help for more information about using these statements and examples.
STARWEST is the premier event for software testers and quality assurance professionals – covering all your testing needs with 100+ learning and networking opportunities:
- Keynotes featuring recognized thought-leaders
- In-depth half- and full-day tutorials
- Conference sessions covering major testing issues and solutions
- Complimentary bonus sessions
- Pre-conference training classes
- The Expo, bringing you the latest in testing solutions
- Networking events including meeting the speakers, the Test Lab, and more!
We look forward to seeing you at our booth!
Virtualization is all around us, and you may be considering using virtual servers as a load generator. There is support for this option in LoadRunner and Performance, but the question to ask is “Are the tools that you are using to test your application affecting the results of those tests?”
Keep reading to find out how noise and other factors could be impacting your tests.
If you use one of our portfolio or enterprise packages you will see a number of changes in the portfolio level menus and features. All ticket users will see some upgrades to milestones. Please feel free to contact us if you have questions. For more control over these upgrades, please contact us in advance to become part of our customer advisory group. Send your comments, questions, and call requests to firstname.lastname@example.org.
Why? Our bigger customers often manage a list of what we now call “Fast IT” projects: websites, mobile apps, digital marketing, and SaaS related projects. They want a "system of record" where they can keep all of these assets together for future maintenance and improvement. We also found that they were using creative techniques to track the commitments they have made for delivering upgrades. Because Assembla was initially designed for a continuous agile process, we were not effectively helping them to deliver on the specific dates that they committed to. We intend to help them with new features for planning and tracking deliverables.
DETAILS OF THE PLANNED UPGRADES

Project Portfolio Management
Support a large number of teams, spaces and deliverables, without losing anything:
- Dashboard overview of spaces, users, and upcoming deliverables
- Better way to group spaces for reporting. Replace the old “groups” system with tags so you can easily tag a space by client, business, type, etc.
- Streamlined process for creating spaces
- A simple way to add and maintain custom space configurations (template spaces)
- Better workflow for moving spaces to and from archived status
- Workflow for reporting on status and needs from the project manager to the portfolio manager
- New summary report tab on spaces
- Improved API for external reporting
- Rename “Projects” to “Spaces”: We found that when our customers use the word “project” they are often not referring to a single Workspace. Usually they mean deliverables, which they show as specific milestones inside a space. Sometimes they are speaking of large projects that span multiple spaces and teams. In order to reduce confusion, we are removing the word “projects” from our spaces tab.
Deliver what you committed to, on time:
- Top-level dashboard to show the status of upcoming milestones.
- Upgrade the milestone views to make it easier to see what has been planned, what has been finished, and whether there are any obstacles. Add optional budget and due date information. Add links to reports and cardwalls that will show the state of that milestone.
- Add reporting about status and obstacles to each milestone.
- Upgrade the milestone calendars that show upcoming milestones for a space or set of spaces.
- Add new List and Timeline (Gantt) views of the milestone calendar.
- Swim lanes to show the progress of epics and stories within each milestone.
- Discussion and cardwall for planning deliverables. We have found that some of our customers have created special ‘proposal spaces’ where they can discuss, plan, and budget upcoming deliverables. They use the ticket discussion threads, and the cardwall for showing the planning and delivery process. We will add special views for portfolio-level Kanban boards and discussions.
- Hosted customers can use SAML login (released recently), which allows them to centralize their user list, passwords, and permissions into their own SAML server.
- Private installations give larger customers complete control over their security environment. We have updated the private install and simplified installation and upgrades.
Importing and linking will make it easy to:
- Keep your assets together for future maintenance and improvement, even if they come from teams, suppliers, and clients working on multiple systems.
- Track deliverables in all your projects.
For more control over these upgrades, please contact us in advance to become part of our customer advisory group. Send your comments, questions, and call requests to email@example.com.
The first three parts of ISO 29119 were released in 2013. I was very skeptical, but also interested, so I grabbed an opportunity to teach the basics of the standard so that the teaching would cover the cost of buying it.
I read it properly, and although I am biased against the standard I made a benevolent start, blogging about it a year ago: http://thetesteye.com/blog/2013/11/iso-29119-a-benevolent-start/
I have not used the standard for real; I think that would be irresponsible, and the reasons should be apparent from the following critique. But I have done exercises using the standard, had discussions about its content, and used most of what is included at one time or another.
Here are some scattered thoughts on the content.
I don’t believe the content of the standard matches software testing in reality. It suffers from the same main problem as the ISTQB syllabus: it seems to view testing as a manufacturing discipline, without any focus on the skills and judgment involved in figuring out what is important, observing carefully in diverse ways, and reporting results appropriately. It puts the focus on planning, monitoring, and control, and not on what is being tested and how the provided information brings value. It gives the impression that testing follows a straight line, but the reality I have been in is much more complicated and messy.
Examples: The test strategy and test plan are so chopped up that it is difficult to do something good with them. Using the document templates will probably have the same tendency as following IEEE 829 documentation: you have a document with many sections that looks good to non-testers, but doesn’t say anything about the most important things (what are you trying to test, and how).
For such an important area as “test basis” – the information sources you use – they only include specifications and “undocumented understanding”, where they could have mentioned things like capabilities, failure modes, models, data, surroundings, white box, product history, rumors, actual software, technologies, competitors, purpose, business objectives, product image, business knowledge, legal aspects, creative ideas, internal collections, you, project background, information objectives, project risks, test artifacts, debt, conversations, context analysis, many deliverables, tools, quality characteristics, product fears, usage scenarios, field information, users, public collections, standards, references, searching.
The standard includes many documentation things and rules that are reasonable in some situations, but often will be just a waste of time. Good, useful documentation is good and useful, but following the standard will lead to documentation for its own sake.
Examples: If you realize you want to change your test strategy or plan, you need to go back in the process chain and redo all the steps, including approvals. (I hope most testers adjust often to reality, and only communicate major changes in conversation.)
Test Design Specifications and Test Cases are not enough; they have also added a Test Procedure step, where you write down in advance the order in which you will run the test cases. I wonder which organizations really want to read and approve all of these… (They do allow exploratory testing, but beware that the charter should be documented and approved first.)
One purpose of the standard is that testing should get better. I can’t really say whether this is the case, but with all the paperwork there is a lot of opportunity cost: time that could have been spent on testing. On the other hand, this might be somewhat accounted for by approvals from stakeholders.
At the same time, I could imagine a more flexible standard that would have much better chances of encouraging better testing. A standard that could ask questions like “Have you really not changed your test strategy as the project evolved?” A standard that would encourage the skills and judgment involved in testing.
The biggest risk with the standard is that it will lead to less testing, because you don’t want to go through all steps required.
It is apparent that they really tried to bend Agile into the standard. The sequentiality of the standard makes this very unrealistic.
But they do allow bug reports to go undocumented, which probably is covered by allowing partial compliance with ISO 29119. (This is unclear, though; together with students I could not determine what actually was needed in order to follow the standard with regard to incident reporting.)
The whole aura of the standard doesn’t fit the agile mindset.
There is a momentum right now against the standard, including a petition to stop it http://www.ipetitions.com/petition/stop29119 which I have signed.
I think you should make up your own mind and consider signing it; it might help if the standard starts being used.
Stuart Reid, ISO/IEC/IEEE 29119 The New International Software Testing Standards, http://www.bcs.org/upload/pdf/sreid-120913.pdf
Rikard Edgren, ISO 29119 – a benevolent start, http://thetesteye.com/blog/2013/11/iso-29119-a-benevolent-start/
ISO 29119 web site, http://www.softwaretestingstandard.org/
I used TechSmith’s Snagit before I started working here. I was creating simple screen captures with annotations for my test documentation and reporting defects. The more I used Snagit, the more it became a part of my daily workflow. I discovered that many testers are doing just what I did — using Snagit for those simple screen capture tasks. But it’s far more powerful than that. And the robust features in Snagit are often overlooked because testers find lots of value in the capture experience alone.
To better understand the features that testers love most about Snagit, I turned to our testers here at TechSmith. Who better to give advice on Snagit features than the testers that help make it! Here are the top features of Snagit our testers use to make their work shine.
Video in Snagit? Yep, it’s in there, but you might be wondering why you would want to use it. It can be difficult to describe the complex behaviors of software solely through text. Capturing video of a defect or anomaly in action is a far more powerful demonstration. With video, you can describe the behavior prior to and following an anomaly. Essentially, you’re narrating the defect. And video is extremely helpful when working with remote testers or developers.
To capture a video, simply activate a capture and select the video button:
Snagit will record full screen or a partial selection of your screen. When you’ve finished capturing, you can trim the video in the Snagit editor and share it using your favorite output. Speaking of sharing…
You can save captures as images in a variety of formats, but did you know about the many outputs for sharing your content from the Snagit Editor? Get your images and videos where they need to go using Output Accessories. From the Share menu, you can output captures to many places including Email, FTP, the Clipboard, MS Office programs, our very own Camtasia and Screencast.com, YouTube, and Google Drive. The complete list of available outputs can be found from the Snagit Accessories Manager on the Share menu:
Additional places to share your captures to include Twitter, Facebook, Evernote, Skype, and Flickr.
Profiles allow users to set up a workflow for their captures, making repetitive work more efficient by pre-configuring a capture type and sharing locations. Testers often use profiles for repetitive testing processes, such as creating test documentation, recording test execution artifacts, and capturing defects. An example of using a profile would be sending an image capture to the Snagit editor for a quick annotation and then directly to Microsoft Word by Finishing the profile:
Or you can even bypass the editor altogether if you want your images to go to your selected output without annotations. Learn more about profiles.
Mobile Capture with TechSmith Fuse
Are you testing a mobile application and need to get images of those bugs over to a developer ASAP? Rather than messing with email, just Fuse it! TechSmith Fuse is a free mobile application that lets you capture images or video on your mobile device (iOS, Android, or Windows), upload them directly to the Snagit Editor through your wireless network, and then enhance your content using Snagit’s many editing tools.
Sharing Your Content
Screencast.com is both a repository for your image and video content as well as a place to conveniently share it with others. Your image and video content can be sent from Snagit and shared privately or publicly. Best of all, you can start storing and sharing your content with a free account that comes with 2GB of storage space.
There you have it — some key features you need to know to get the most out of Snagit. Happy capturing!
Recent months have seen a number of leading commentators offer serious criticism of the open source software movement as a whole, suggesting that these efforts are either doomed to fail or not worth the investment. Yet according to InfoWorld contributor Matt Asay, such criticism is often misguided, demonstrating a lack of understanding as to what lies at the heart of open source software efforts.
Criticizing the critics
Asay noted that two of the most recent attacks against open source efforts came from The New York Times' Quentin Hardy and fellow InfoWorld contributor Galen Gruman. In the latter case, Asay argued that Gruman rightfully criticizes many recent open source mobile failures. However, he fails to acknowledge that Android is, in fact, a mobile open source operating system and is also the most successful mobile OS in the world. Instead, Gruman maintained that because Android's development is primarily conducted by Google, rather than the open source community, it is somehow inherently distinct from true, traditional open source projects.
Asay explained that the reality of the situation is that the vast majority of open source projects currently take this form. OpenStack, Linux and many others originate with major companies before being offered to the open source community at large. The writer called the notion of open source projects springing forth from organic, communal, selfless developers a "mythical (and mostly false)" understanding.
Hardy, on the other hand, focused his critique on what he saw as the commercial failure of open source. Asay countered by noting that many major companies are now pulling in tremendous revenue thanks to open source software. The key point is that rather than trying to make open source directly profitable, firms are selling services and software that complement open source offerings.
The new standard
Perhaps most importantly, Asay argued that open source is, simply put, the standard method of software development now. As Mike Olson, co-founder of Cloudera, recently pointed out, companies really have no choice but to embrace open source.
"You can no longer win with a closed source platform, and you can't build a successful stand-alone company purely on open source," said Olson, the news source reported.
As a result, open source increasingly represents the standard for businesses, rather than a risky commercial venture.
Further evidence of this trend can be seen in the creation and funding of the Core Infrastructure Initiative. The CII was created to identify open source projects in need of funding to remain operational. As International Business Times contributor Joram Borenstein reported, this organization received a tremendous amount of funding from major tech-focused companies, including Google, Facebook and Microsoft. Borenstein explained that these organizations likely invested money into CII because they acknowledged the degree to which they depend on open source software. They have a financial interest in ensuring these projects remain secure and operational.
The commercialization of open source software should not be seen as controversial. On the contrary, it is well established and likely to grow further.
Many years ago I took a management class. One of the exercises we did was on achieving consensus. My group did not reach an agreement because I wouldn’t lower my standards. I wanted to discuss the matter further, but the other guys grew tired of arguing with me and declared “consensus” over my objections. This befuddled me, at first. The whole point of the exercise was to reach a common decision, and we had failed, by definition, to do that– so why declare consensus at all? It’s like getting checkmated in chess and then declaring that, well, you still won the part of the game that you cared about… the part before the checkmate.
Later I realized this is not so bizarre. What they had effectively done is ostracize me from the team. They had changed the players in the game. The remaining team did come to consensus. In the years since, I have found that changing the boundaries or membership of a community is indeed an important pillar of consensus building. I have used this tactic many times to avoid unhelpful debate. It is one reason why I say that I’m a member of the Context-Driven School of Testing. My school does not represent all schools, and the other schools do not represent mine. Therefore, we don’t need consensus with them.
Then what about ISO 29119?
The ISO organization claims to have a new standard for software testing. But ISO 29119 is not a standard for testing. It cannot be a standard for testing.
A standard for testing would have to reflect the values and practices of the world community of testers. Yet the concerns of the Context-Driven School of thought, which has been in development for at least 15 years, have been ignored and our values shredded by this so-called standard and the process used to create it. They have done this by excluding us. There are two organizations explicitly devoted to Context-Driven values (AST and ISST), and our community holds several major conferences a year. Members of our community speak at all the major practitioner conferences, and our ideas are widely cited. Some of the most famous testers in the world, including me, are Context-Driven testers. We exist, and together with the Agilists, we are the source of nearly every new idea in testing in the last decade.
The reason they have excluded us is that they know we won’t agree to any simplistic standard based on templates or simple formulae. We know those things look pretty but they don’t help. If ISO doesn’t exclude us, they worry they will never finish. They know we will challenge their evidence, and even their ethics and basic competence. This is why I say the craft is not ready for standards. It will be years before all the recognized experts in testing can come together and agree on anything substantial.
The people running the ISO effort know exactly who we are. I personally have had multiple public debates with Stuart Reid, on stage. He cannot pretend we don’t exist. He cannot pretend we are some sort of lunatic fringe. Tens of thousands of testers have watched my video lectures or bought my books. This is not a case where ISO can simply declare us to be outsiders.
The Burden of Proof
The Context-Driven community stands for excellence in testing. This is why we must reject this depraved attempt by ISO to grab power and assert control over our craft. Our craft is still an open marketplace of ideas, and it is full of strong debates. We must protect that marketplace and allow it to evolve. I want the fair chance to put my competitors out of business (or get them to change their business) with the high quality of my work. Context-Driven testing has been growing in strength and numbers over the years, whereas this ISO effort appears to be a job-protection program for people who can’t stomach debate. They can’t win the debate, so they want to remake the rules.
The burden of proof is not on me or any of us to show that the standard is wrong, nor is it our job to make it right. The burden is on those who claim that the craft can be standardized to study the craft and recognize and resolve the deep differences among us. Failing that, there can be no ethical or rational basis for standardization.
This blog post puts me on record as opposing the ISO 29119 standard. Together with my colleagues, we constitute a determined and sustained and principled opposition.
Apple has always prided itself on a sleek, sexy, streamlined experience. Moreover, it’s the same experience that a user on an iPhone 4 in the United States may very well be sharing with an iPhone 4 user in India.
Now take a look at Android. He’s kind of the sloppy guy at the wedding who decided to wear shorts and sandals. But this half of the Big Two has always embraced its different, defiant, sloppy lifestyle, with a customized experience on each device that’s as unique as a snowflake.
However, Android has lately taken this very un-Apple business model to an extreme. According to PC Magazine, there are now approximately 18,796 unique Android devices in the wild, a number that has jumped a whopping 60% in just one year from just over 11,000.
So with this proliferation of Android devices floating around, has the experience for Android testers and developers become that much more of a challenge? We’d like to hear from you in the Comments below.
When I’m not testing, one of my favorite hobbies is alcohol. Wait…that didn’t come out right. What I meant was my hobby is learning about wine, beer and spirits. Yeah, that sounds better.
While I do love a cold beer in the summer, a single-malt scotch when I’m feeling sophisticated, or an 1855 classified Bordeaux on special occasions, I think I spend more time studying booze than I do drinking it. I really enjoy learning about the various appellations d’origine contrôlée (AOCs) in France, and the differences between a Pinot Noir from California and one from Burgundy. I sound pretty smart, huh?
As any cultured, refined wine connoisseur such as myself knows, the true masters of the bottle are called sommeliers. These fine folks are highly trained adult beverage experts who often work in fancy, fine-dining restaurants, setting the wine list, caring for the cellar and working with customers to help them select the perfect wine.
So what could a tester possibly learn from someone obsessed with booze? Good question! I have three answers.
Be passionate
I have yet to find people who are more passionate about what they do than Master Sommeliers. Need proof? Watch the movie Somm (available on Netflix). The tremendous amount of dedication and effort these people pour (wink, wink) into their work is simply astounding.
A sommelier must be constantly learning and exploring. Each year, a new vintage of every wine is created. That means thousands of new wines are added to the multitude that already exist…and a sommelier is expected to be familiar with all of them. And you thought the IT world was constantly changing!
There will always be a new product to test, a new approach to learn, a new idea to debate. Testers who are passionate about testing are excited about these new developments as they are opportunities to grow and improve.
Be a servant
From the Demeanor of the Professional Sommelier:
It is important for sommeliers to put themselves in the role of a server; no job or task on the floor is beneath the role of a sommelier; he or she does whatever needs to be done in the moment to take care of the guest.
Sommeliers are at the top of the service food chain. They are highly trained, knowledgeable, and professional people, yet they are also the most humble servants. They realize that their qualities alone are worthless. They must be used in the service of a customer for their value to be realized.
Testers too need to remember that we don’t get paid because we think we’re great testers. We get paid because the person who is paying us values our work. Put your client’s perception of value and quality ahead of your own.
Be an expert
A sommelier would never walk up to your table and say, “I recommend this bottle I found in the back.” They are the ultimate experts in wine selection and food pairing. He or she asks questions: What do you like? What have you had before? What is your budget? They learn about the customer and use that information to help them find exactly the right bottle.
Likewise, a tester should be knowledgeable in the testing field. A good tester doesn’t just randomly go banging on things — they too take a more thoughtful approach. What areas are the most error prone? What parts of the product are the most important? What information would be most useful at this stage in the product’s life cycle? They learn about the product and the circumstances under which they are testing to ensure they provide the most value possible.
Take pride in your work. Understand that testing is not a low-skilled job; it is a highly cognitive profession with demands on your professionalism, communication skills, and attention to detail. It takes a lot of effort, study and experience to become an expert (or so I’m told), but that should be the goal of every tester.
A Gold-rated tester and Enterprise Test Team Lead (TTL) at uTest, Lucas Dargis has been an invaluable fixture in the uTest Community for 2 1/2 years, mentoring hundreds of testers and championing them to become better testers. As a software consultant, Lucas has also led the testing efforts of mission-critical and flagship projects for several global companies. You can visit him at his personal blog and website.
The Department of Homeland Security (DHS) is primarily dedicated to protecting the United States from external threats. While these efforts have typically centered on the physical realm, the DHS is now turning its attention to the digital realm as well. As ZDNet contributor Steven J. Vaughan-Nichols recently highlighted, the DHS now offers a service specifically designed to help organizations examine open source software code for potential security threats.
Open source verification
The new service, announced during OSCon, is called the Software Assurance Marketplace and known as SWAMP. As Patrick Beyer, project manager for SWAMP at the Morgridge Institute for Research, explained to the news source, the purpose of this project is to ensure that government agencies remain safe and secure when leveraging open source solutions.
"With open source's popularity, more and more government branches are using open-source code. Some are grabbing code from here, there and everywhere," Beyer explained, the source reported. "We're the one place you can go to check into the code."
The program is funded by a $23.4 million grant from the Department of Homeland Security Science & Technology Directorate, Vaughan-Nichols explained. It was designed by researchers from several schools, including Indiana University and the University of Wisconsin-Madison. The writer explained that the researchers involved in the initiative bring expertise in a number of fields, such as national distributed facilities and identity management.
Static analysis tools
As the writer explained, SWAMP relies on static code analysis tools to examine open source software for potential security vulnerabilities. With these solutions, users can conduct scans without the need to actually execute the programs in question.
"These static analysis tools review program code and search for application coding flaws, unintentional or intentional, that could give hackers access to critical company data or customer information," SWAMP explained, the source reported
SWAMP also provides users with nearly 400 open source software packages designed to allow developers to improve their software projects, Vaughan-Nichols noted.
Beyer emphasized that SWAMP users will not need to worry about potential privacy concerns.
"All SWAMP activities performed by users are kept completely confidential," Beyer said, according to the news source. "The only one who sees your code are you and the SWAMP system administrators. In no way does testing your programs on SWAMP give the government any access, control or rights to your programs."
Open source in the public sector
As Beyer noted, open source solutions are becoming increasingly popular for public sector organizations at every level and around the world. Among the key reasons for this trend is open source's superior flexibility and potential for cost-savings, as Government Computing highlighted.
According to the source, government agencies now realize that open source tools provide a greater degree of control over how the software is implemented and utilized. Furthermore, many proprietary software providers require aggressive, inflexible contracts, a fact which is turning even more public sector organizations toward open source options.