Trying to come up with gift ideas for those hard-to-shop-for people on your holiday list? Maybe some of Seapine’s customers can help!

For the Gamers
Intuos Tablet Wacom is the leading manufacturer of pen tablets, styluses, and interactive pen displays that let the artists on your list express their creativity as fluently in the digital world as they would with ink on paper or paint on canvas. ($99)
Momentum Headphones For the music lovers on your list, nothing sounds better than Sennheiser. They’ll rock out in style with a pair of Momentum headphones, which WIRED called “more than just good-looking, they’re downright sexy.” ($270)
Olaf’s in Trouble Olaf’s in Trouble is a Frozen version of the classic Trouble game by Hasbro. If your kids love Frozen, they’ll have a blast playing this game as their favorite Frozen character, traveling around Arendelle to save Olaf. ($15)
Nerf N-Strike Elite Demolisher 2-in-1 Blaster For the bigger kids on your list, get them the newest in Nerf firepower. They’ll dominate their next Nerf war with motorized dart firing and missiles. ($40)

For the Ones with Everything
[+] Trip Universal Air Vent Mount Logitech makes fantastic gadgets, and one of our favorites is the [+] Trip smartphone mount for the car. It’s perfect for gadget lovers, no matter what phone they have. ($30)
Braun Series 7 790cc Shaver “Movember” is coming to an end, so the bearded ones on your list might soon need a new shaver. Braun makes the best. ($270)
Davids Tea Festive Collection Our sales and support teams enjoy Davids Tea so much, they’ve taken to holding “high tea” every day. If you’ve got a tea lover on your list, they’re sure to enjoy the Festive Collection. ($50)
Sonic Drive-In Gift Card Sonic Drive-In has great food and desserts, so everyone will appreciate a Sonic gift card! (prices vary)

Need more ideas?
If you still haven’t checked off everyone on your list, visit Conn’s Home Plus for ideas and great deals on electronics, computers, appliances, furniture, and more.
Happy shopping and Happy Holidays from Seapine Software!
Last week, uTest launched two new Platform features for uTesters on paid projects that move the needle in our continuous pursuit of quality (plus a very useful change to existing tester dashboard functionality). Here’s a recap of what is included in the latest uTest Platform release.

Bug Report Integrity
Most testers understand that the role of a bug report is to provide information. A “good” or valuable bug report takes that a step further and provides useful, actionable information in an efficient way. As such, in addition to approving tester issues, Test Team Leads (TTLs) and Project Managers (PMs) now have the ability to rate the integrity of a tester’s bug report by setting it to High, Unrated, or Low. By default, all bugs are set to Unrated.
The Bug Report Integrity feature will reward testers who meet a high report integrity standard by providing a positive rating impact to the quality sub-rating. Conversely, we will also seek to educate testers who may be missing the mark by negating any positive impact that may have occurred based on the value of the bug itself.
For more information, please review the Bug Report Integrity uTest University course.

Tester Scorecard
When navigating into a test cycle, you will see a new tab called “Tester Scorecard.” Clicking this tab will bring up a ranked list of testers based on their bug submissions and the final decisions on these bugs — i.e. approvals and rejections.
Points are awarded according to the explanation at the top of the Scorecard and result in a score that is used to rank testers based on their performance. The table can be sorted by any of its columns. If two testers have identical scores (i.e., the same number of bugs approved at the same value tiers), the tester who started reporting bugs first ranks higher.
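As a rough sketch of the ranking rule described above (the point values and field names here are invented for illustration, not uTest’s actual scoring scheme):

```python
from dataclasses import dataclass

@dataclass
class TesterScore:
    name: str
    points: int             # total points from approved bugs (assumed scheme)
    first_report_at: float  # timestamp of the tester's first bug report

def rank(testers):
    # Higher points rank first; on a tie, whoever started reporting first wins.
    return sorted(testers, key=lambda t: (-t.points, t.first_report_at))

scores = [
    TesterScore("ada", 30, 100.0),
    TesterScore("bob", 30, 50.0),
    TesterScore("cam", 45, 200.0),
]
ranking = [t.name for t in rank(scores)]  # cam first, then bob (earlier tie-break), then ada
```

Sorting on the negated score with the first-report timestamp as a secondary key applies both rules in a single stable pass.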
Our hope is that this Scorecard will spark some additional competition among top performers and will also be useful for PMs and TTLs to choose testers for participation bonuses. Of course, it is still at the discretion of the TTL or PM to decide who won any bug battles or is eligible for any bonus payments.
Note: Scores indicated on the scorecard do not impact the tester’s rating.
Additionally, there was an improvement to existing functionality within the tester dashboard. Pending payouts are now included so that testers can easily see how much they have earned:
If you like what you see, feel free to leave your comments below, or share your ideas on these and other recent platform updates by visiting the uTest Forums. We’d love to hear your suggestions, and frequently share this valuable feedback with our development team for future platform iterations!
We are working with a lot of performance engineers that have Tibco Business Works (BW) in the mix of technologies they are responsible for. This particular story comes from A. Alam – a performance engineer who is responsible for a large enterprise application that uses Tibco to connect their different system components. Alam and his […]
The post Finding and Fixing Memory Leaks in Tibco Business Works appeared first on Dynatrace APM Blog.
Upper management actually asked me to share my TDD experience as well, so I just published an article internally in our Embedded Software newsletter describing how TDD helped my project. Here’s the summary from that article (I think the dates really say it all):
My doubts that TDD could be used for an embedded application with an emphasis on external peripherals have been eliminated, and I have found the time invested in writing tests and mocks to be well worth it.
I find it compelling that
- I required only 4 days of actual hardware testing before achieving my integration goal, and that goal came essentially 2 months ahead of schedule.
- For the past 5 months, since May, I have not used the in-system debugger at all and instead rely on TDD to minimize the introduction of bugs in the first place.
Based on my experience, I found TDD to be a positive feedback exercise: passing my first tests and catching bugs immediately encouraged me to write more tests, which led to more successful results, until I now have a high level of code coverage and a handy set of regression tests. (And since I wasn’t frantically debugging in the lab, I had enough time to write this article!)
Thanks, Name Withheld
David Oreol has been a uTester since the very beginning, and is a full-time Test Team Lead Premier and Gold-rated tester on paid projects at uTest. Before joining the community, David earned a B.S. in Computer Science from California State University Fresno and worked in IT and as a software engineer.
Be sure to follow David’s profile on uTest as well so you can stay up to date with his activity in the community!
uTest: Android or iOS?
David: For work, both. I like testing on both environments, but for personal use, it is iOS and Mac all the way. I like the ease of use and integration between the mobile and desktop platforms. I don’t like having to constantly tweak my phone or computer to get it to work. I used to be a die-hard Windows fan, but I switched to Mac a few years ago and haven’t looked back.
uTest: What drew you into testing initially? What’s kept you at it?
David: I’ve always been one to sign up for beta testing of apps I use, so it was a natural fit. I have a degree in Software Engineering as well, so that certainly helps out. What’s kept me going is the variety of products. I’ve tested everything from hardware devices to websites to Mac and PC apps to iOS and Android apps. Many of the products I have gotten to test weren’t available to the public yet. Seeing something that I tested out in the wild is a big thrill for me, even if I can’t tell anyone that I worked on it.
uTest: What’s your go-to gadget?
David: For work, my new iPhone 6 Plus. I’m finding some interesting bugs with it since it has the larger screen and the new wider landscape layout. For relaxing, I love my Kindle Paperwhite. The e-ink screen is so much easier on my eyes than a traditional backlit screen. I think that everyone that reads a lot should own an e-ink reader.
uTest: What is the one tool you use as a tester that you couldn’t live without?
David: My 27” iMac. The large screen really helps with big spreadsheets for work. Additionally, OS X has built-in virtual desktops that are super easy to use. I normally run 7 desktops with different browsers and tools on each one. It’s almost like having multiple monitors, but without taking up all my desk space.
uTest: What keeps you busy outside testing?
David: Lately, I’ve been running and walking a lot. I enjoy the time away from the computer. Otherwise, I spend most of my time with my wife and playing with our ferrets. We also really enjoy hiking and tent camping.
You can also check out all of the past entries in our Meet the uTesters series.
A leader will find themselves choosing between two solutions or two situations that compete against each other. A leader successfully “rides the paradox” when they adopt an “AND” mindset, instead of an “OR” mindset. Instead of choosing one solution over another, they find a way to satisfy both situations, even though they contradict one another.
A common Tech Lead paradox is the case of Delivering versus Learning.

The case for delivering
In the commercial world of software development, there will always be pressure to deliver software that satisfies user needs. Without paying customers, companies cannot pay their employees. The more software meets user needs, the more a company earns, and the more the company can invest in itself.
Business people will always be asking for more software changes, as there is no way of knowing whether certain features really do meet user needs. Business people do not understand (and cannot be expected to fully understand) what technical infrastructure is needed to deliver features faster or more effectively. As such, they will always push to deliver software faster.
From a purely money-making point of view, it is easy to interpret delivering software as the way of generating more earnings.

The case for learning
Software is inherently complex. Technology constantly changes. The problem domain shifts as competitors release new offerings and customer needs change in response and evolve through constant usage. People with certain skills leave a company, and new people with different skills join. Finding the right balance of skills to match the current set of problems is a constant challenge.
From a technologist’s point of view, learning about different technologies can help solve problems better. Learning about completely different technologies opens up new opportunities that may lead to new product offerings. But learning takes time.

The conflict
For developers to do their job most effectively, they need time to learn new technologies and to improve their own skills. At the same time, if they spend too much time learning, they cannot deliver enough to help a company reach its goals, and the company may not earn enough money to compensate its employees, developers included.
Encouraging learning at the cost of delivering also potentially leads to technology for technology’s sake, where developers use technology just for the chance to use it. What they deliver may not solve user needs, and the whole company suffers as a result.

What does a Tech Lead do?
A Tech Lead needs to keep a constant balance between finding time to learn and delivering the right thing effectively. It will often be easier for a Tech Lead to succumb to the pressure of delivering over learning. Below is advice on how you can keep a better balance between the two.

Champion some time to learn
Google made their 20% time for developers famous. Although not consistently implemented across the entire organisation, the idea has been adopted by several other companies to give developers some creative freedom. 20% time is not the only way. Hack days, like Atlassian’s ShipIt days (renamed from FedEx days), also set aside explicit, focused time to allow developers to learn and play.

Champion learning that addresses user needs
Internally run hack days encourage developers to unleash their own ideas on user needs, where they get to apply their own creativity and often learn something in the process. They get to play with technologies and tools they do not use during their normal week, but the outcome is usually focused on a user need, with more business investment (i.e., time) going towards a solution that makes business sense, not just technology for the sake of technology.

Capture lessons learned
In large development teams, the same lesson could be learned by different people at different times. This often means duplicated effort that could have been spent learning different or new things. A Tech Lead can encourage team members to share what they have learned with other team members to spread the lessons.
Some possibilities I have experienced include:
- Running regular learning “show-and-tell” sessions – Where team members run a series of lightning talks or code walkthroughs around problems recently encountered and how they went about solving them.
- Update a FAQ page on a wiki – Allows team members to share “how to do common tasks” that are applicable in their own environment.
- Share bookmark lists – Teams curate a list of links to interesting reads based on problems they have encountered.
A Tech Lead can demonstrate their support for a learning environment by encouraging everyone to be a student and a teacher at the same time. Most team members will have different interests and strengths, and a Tech Lead can encourage members to share what they know. Encouraging team members to run brown bag sessions on topics that enthuse them fosters an atmosphere of sharing.

Weekly reading list
I know of a few Tech Leads who send a weekly email with interesting reading links to a wide variety of technology-related topics. Although they do not expect everyone to read every link, each one is hopeful that one of those links will be read by someone on their team.
We love the brave .NET pioneers out there who have taken the first steps towards something new and amazing. Whether exploring a new language or tool, or building a community, these two .NET programmers aren’t scared to explore and discover wonderful new things.

Chris Dengler
Chris is currently the Founder/CEO & CTO of Right Arm Development. He was formerly a Senior Software Design Engineer with Microsoft Corporation and one of the two original architects and developers of what became known as Web Services/SOAP, introduced in .NET and now used on billions of devices worldwide. Chris’ work runs in many large corporations and infrastructures across all industry segments, such as Sony, Verizon, Volkswagen, Trans World Entertainment Corporation, Honeywell, US Airways, Costco, PetSmart, and American Express, to name just a few. Chris also created the architecture for Microsoft’s internal “Idea Generation Tool” called “Greenhouse”. The Greenhouse idea was delivered directly to Steve Ballmer and then cultivated through various teams based upon Chris’ architecture and idea. Chris continues to impress. Keep up with him on Twitter @ChrisDengler.

Mahesh Chand
Mahesh is the founder of C# Corner. If you are unfamiliar with it, C# Corner is a free, member-contribution-based open platform for developers to solve problems, learn new technology, and hang out. Mahesh has been awarded the prestigious Microsoft MVP Award for 9 consecutive years for his contributions to the developer community. He is also the author of several programming books: he wrote and published his first, A Programmer’s Guide to ADO.NET in C#, with Apress at the age of 25, and has since authored several more .NET programming books. He is not slowing down anytime soon. Keep up to date on his latest endeavors on Twitter @mcbeniwal.
After analyzing the results of this year’s State of Medical Device Development Survey, we identified three key challenge areas within the industry: managing risk, working with documents, and overcoming barriers to improvement.
In a previous blog post, we looked at what the survey revealed about managing risk, and how TestTrack can help with that important facet of medical device development. In this post, we’re going to look at the challenges of working with documents.
When asked to identify their most time-consuming tasks, respondents put “documenting work” and “reviewing documentation” at the top of the list.
If you had a choice between spending your time on refining the product and getting it to market faster, or managing compliance documentation, which would you choose? You’d choose to work on the product, right?
The problem is, getting new devices to market depends on proving that you’ve complied with all applicable regulations. In order to do that, you have to have your documentation in order.
Many companies attempt to meet this challenge by sharing the reports, traceability matrices, and other necessary documentation on a network or in a document control system. As we previously pointed out, this is bad for risk management. It’s also a major productivity killer.
Development teams lose valuable time as they struggle to manually manage these documents. That’s why many companies are moving away from document-centric systems. The industry made a huge leap forward in this area from 2011 to 2013.
Although that progress seemed to stagnate this year, we’re confident the industry will continue to move toward artifact-centric systems as the productivity benefits become more widely recognized.

Improving Traceability
Possibly the biggest timesaver delivered by an artifact-centric solution like TestTrack is in leveraging traceability.
Teams using a document-centric approach spend an inordinate amount of time digging through documents to ensure accurate traceability from design through code and testing. Hours, days, or even weeks can be lost to maintaining the trace matrix.
Nearly half of survey respondents reported losing a day or more each time they need to update the traceability matrix.
When maintaining traceability takes that much time, you’re forced to choose between creating the trace matrix early and adding massive time to the schedule, or creating it late in the process and wasting less time—but also losing valuable traceability data. Only 19% said they create the traceability matrix at the very beginning of the development process; 13% said they wait until right before submission.
TestTrack allows you to focus on working with individual project assets or artifacts:
- User stories
- Release planning
- Work items
- Test cases
- Defect resolutions
- Risk controls and analyses
These artifacts can be sent out for review to only the people responsible for each piece, with TestTrack centralizing their changes. User A will see user B’s changes in real time and can adjust their updates and feedback accordingly, eliminating the need to merge changes.
An artifact-centric approach can also easily support various development methodologies—spiral, iterative, parallel, Agile, Waterfall, and other hybrid alternatives.
To regain the lost productivity, medical device development teams need to get out from under the burden of document-centric systems. Migrating to an artifact-centric approach with TestTrack allows for much better data, risk, gap, and impact analyses. With TestTrack, users can focus on tasks instead of constantly reviewing and updating documents.
This new book by Gojko Adzic and David Evans is deceptively slim. It’s not just 50 ideas to improve your user stories. It’s 50 experiments you can try to improve how you deliver software. For each experiment, David and Gojko provide you with information and resources “to make it work”.
One chapter that has caught my eye is “Use Low-Tech for Story Conversations”. Gojko and David advise holding story discussions in rooms with lots of whiteboards and few big tables. When everyone sits at a big conference table, looking at stories on a monitor or projected on a wall, they start tuning out and reading their phones. Standing in front of a whiteboard or flip chart encourages conversation, and the ability to draw makes that conversation more clear. Participants can draw pictures, connect boxes with arrows, write sentences, make lists. It’s a great way to communicate.
I’ve always been fond of the “walking skeleton”: identifying the minimum stories that will deliver enough of a slice to get feedback and validate learning. Gojko and David take this idea even further: they put the walking skeleton on crutches. Deliver a user interface with as little as possible below the surface now, get feedback from users, and iterate to continually improve it. As with all the ideas in the book, the authors provide examples from their own experience to help you understand the concept well enough to try it out with your team.
David and Gojko understand you’re working in a real team, with corporate policies and constraints that govern what you can do. Each story idea ends with a practical “How to Make it Work” section so you can get your experiment started.
Again, it’s not just a book of tips for improving your user stories. It’s fifty ways to help your customers identify the business value they need, and deliver a thin slice of that value to get feedback and continue to build it to achieve business goals. It’s a catalog of proven practices that guides you in learning the ones you want to try.
unafraid of the difficult (to ask and often to answer!) questions. And he's not the only one. Questions are a tester's stock-in-trade, but what kinds of factors can make them difficult to ask? Here are some starters:
- the questions are hard to frame because the subject matter is hard to understand
- the questions have known answers, but none are attractive
- the questions don't have any known answers
- the questions are unlikely to have any answers
- the questions put the credibility of the questionee at risk
- the questions put the credibility of the questioner at risk
- the questions put the credibility of shared beliefs, plans or assumptions at risk
- the questions challenge someone further up the company hierarchy
- the questions are in a sensitive area - socially, personally, morally or otherwise
- the questions are outside the questioner's perceived area of concern or responsibility
- the questioner fears the answer
- the questioner fears that the question would reveal some information they would prefer hidden
- the questioner isn't sure who to ask the question of
- the questioner can see that others who could ask are not asking the question
- the questioner has found that questions of this type are not answered
- the questioner lacks credibility in the area of the question
- the questioner lacks confidence in their ability to question this area
- the questionee is expected not to want to answer the question
- the questionee is expected not to know the answer
- the questionee never answers questions
- the questionee responds negatively to questions (and the questioner)
- the questionee is likely to interpret the question as implied criticism or as pointing out a lack of knowledge
- the answer will not satisfy the questioner, or someone they care about
- the answer is known but cannot be given
- the answer is known to be incorrect or deliberately misleading
- the answer is unknown
- the answer is unknown but some answer is required
- the answer is clearly insufficient
- the answer would expose something that the questionee would prefer hidden
- the answer to a related question could expose something the questionee would prefer hidden
- the questioner is difficult to satisfy
- the questionee doesn't understand the question
- the questionee doesn't understand the relevance of the question
- the questionee doesn't recognise that there is a question to answer
Because they'll make me think, suggest that I might reconsider, force me to understand what my point of view on something actually is. Because they expose contradictions and vagueness, throw light onto dark corners, open up new possibilities by suggesting that there may be answers other than those already thought of, or those that have been arrived at by not thinking.
Because they can start a dialog in an important place, one which is the crux of a problem or a symptom or a ramification of it.
Because the difficult questions are often the improving questions: maybe the thing being asked about is changed for the better as a result of the question, or our view of the thing becomes more nuanced or increased in resolution, or broader, or our knowledge about our knowledge of the thing becomes clearer.
And even though the answers are often difficult, I do my best to give them in as full, honest and timely a fashion as I can because I think that an environment where those questions can be asked safely and will be answered respectfully is one that is conducive to good work.
* And we haven't taken into account the questions that aren't asked because they are hard to know or the answers that are hard purely because of the effort that's required to discover them or how differences in context can change how questions are asked or answered, how the same questions can be asked in different ways, willful blindness, plausible deniability, behavioural models such as the Satir Interaction Model and so on.
Thanks to Josh Raine for his comments on an earlier draft of this post.
“The only type of testing that I can do is manual testing.”
“Test automation is very important, but I am too busy now to learn something new.”
“Test automation is useful, but I will learn it when I will need it.”
“I am interested in test automation, but I don’t know any programming and it will take a long time to learn it.”
“I want to learn test automation, but my employer does not have any training programs.”
Have you ever heard any of these stories? I have, and not only once, but many times, about test automation, load testing, and web service testing.
Most of the testers I know say in one way or another that they would like to learn more about their profession but, “not now, maybe later, when the conditions will be better, when they will need the new skills in their job, when their employer will pay for their training, when someone will train them for free, when they will be less busy, etc.” The list goes on.
People sometimes say the same things about fitness: I will do it tomorrow, I will do it when I will have more time, when I will need it, etc. I have certainly done this many times as well.
But why exactly are testers not interested in learning new skills? Actually, to take things a bit further, why are testers the least interested in upgrading their skills out of all people that work in IT? Can it be because testing is seen as an easy job that anyone can do? Or because there is still no formal education track for testing? Or because some testers could not do other IT jobs well and needed a way out? Because of complacency? Maybe because of affluence and a high standard of living? Or possibly because of the illusion that things that they did yesterday will be there for them forever?
Who knows. I certainly don’t. But it is something that I see a lot. And recently, I asked other people what they think about it: Are testers the IT people least interested in learning new things?
One of the people I asked is a development director for a large development company with thousands of developers and hundreds of testers. He hires lots of testers all the time. The other three I talked with are IT recruiters who know the IT market very well. They all agreed with my observation. And none of them had better answers than me.
What do you think?
Alex Siminiuc is a uTest Community member and Gold-rated tester and Test Team Lead on paid projects at uTest. He has also been testing software applications since 2005…and enjoys it a lot. He lives in Vancouver, BC, and blogs occasionally at test-able.blogspot.ca.
[I added this example in a later post]
There are lots of pieces of code embedded in places that make them very hard to test. Sometimes these bits are essential to the correct operation of your program and can have complex state machines, timeout conditions, error modes, and who knows what else. Unfortunately, they are often used in some subtle context such as a complex UI, an asynchronous callback, or another complex system. This makes them very hard to test, because you might have to induce the appropriate failures in system objects to do so. As a consequence, these systems are often not very well tested, and if you bring up the lack of testing you are not likely to get a positive response.
It doesn’t have to be this way.
I offer below a simple recipe to allow any code, however complex, however awkwardly inserted into a larger system, to be tested for algorithmic correctness with unit tests.
Take all the code that you want to test and pull it out of the system in which it is being used, so that it is in separate source files. You can build these into a .lib (C/C++) or a .dll (C#/VB/etc.); it doesn’t matter which. Do this in the simplest way possible and just replace the occurrences of the code in the original context with simple function calls to essentially the same code. This is just an “extract function” refactor, which is always possible.
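To illustrate the extract-function step with a minimal Python sketch (the retry logic and all names here are invented for illustration, not from any particular codebase):

```python
# Before: the backoff algorithm was buried inside a UI callback, so testing it
# meant driving the whole UI. After extraction, the callback just calls a
# plain function that can be exercised directly.

def backoff_schedule(base_delay, attempts):
    """The pure algorithmic core, now in its own function:
    exponential backoff delays for a fixed number of attempts."""
    return [base_delay * (2 ** i) for i in range(attempts)]

def on_download_clicked(event):
    # The UI wiring stays where it was; only the algorithm moved out.
    for delay in backoff_schedule(1.0, 5):
        pass  # retry the download, waiting `delay` seconds between attempts
```

The callback's behavior is unchanged, but the schedule logic is now reachable by a plain function call.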
In the new library code, remove all uses of ambient authority and replace them with a capability that does exactly the same thing. More specifically, every place you see a call to the operating system replace it with a call to a method on an abstract class that takes the necessary parameters. If the calls always happen in some fixed patterns you can simplify the interface so that instead of being fully general like the OS it just does the patterns you need with the arguments you need. Simplifying is actually better and will make the next steps easier.
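A sketch of this capability transformation, in Python rather than the C/C++ or C# the post discusses (all names here, `FileSystem`, `load_config`, and `OsFileSystem`, are invented for illustration), including the single pass-through implementation that just does the same calls as before:

```python
from abc import ABC, abstractmethod
import pathlib

class FileSystem(ABC):
    """Capability object replacing direct OS calls, narrowed to the one
    pattern this code actually needs rather than the fully general OS API."""
    @abstractmethod
    def read_text(self, path: str) -> str: ...

def load_config(fs: FileSystem, path: str) -> dict:
    # Was: text = open(path).read()  <- ambient authority, hard to test.
    # Now the code only uses the capability it was explicitly handed.
    text = fs.read_text(path)
    return dict(line.split("=", 1) for line in text.splitlines() if "=" in line)

class OsFileSystem(FileSystem):
    """The one real implementation: forwards to the OS exactly as before."""
    def read_text(self, path: str) -> str:
        return pathlib.Path(path).read_text()
```

Production code passes `OsFileSystem()`; a test passes any stand-in that implements `read_text`.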
If you don’t want to add virtual function calls you can do the exact same thing with a generic or a template class using the capability as a template parameter.
If it makes sense to do so you can use more than one abstract class or template to group related things together.
Use the existing code to create one implementation of the abstract class that just does the same calls as before.
This step is also a mechanical process, and the code should be working just as well as it ever did when you’re done. And since most systems use only a few OS features in any testable chunk, the abstract class should stay relatively small.
Take the implementation of the abstract class and pull it out of the new library and back into the original code base. Now the new library has no dependencies left. Everything it needs from the outside world is provided to it on a silver platter and it now knows nothing of its context. Again everything should still work.
Create a unit test that drives the new library by providing a mock version of the abstract class. You can now fake any OS condition, timeouts, synchronization, file system, network, anything. Even a system that uses complicated semaphores and/or internal state can be driven to all the hard-to-reach error conditions with relative ease. You should be able to reach every basic block of the code under test with unit tests.
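For the timeout case, here is one possible shape of such a mock-driven test, again as a hedged Python sketch (`Clock`, `wait_until_ready`, and `FakeClock` are invented names): a fake clock lets the test reach the timeout branch instantly, with no real waiting.

```python
from abc import ABC, abstractmethod

class Clock(ABC):
    """Capability for time, so tests can fake timeouts instead of sleeping."""
    @abstractmethod
    def now(self) -> float: ...
    @abstractmethod
    def sleep(self, seconds: float) -> None: ...

def wait_until_ready(clock: Clock, is_ready, timeout: float, poll: float = 0.1) -> bool:
    """Code under test: poll is_ready() until it returns True or timeout elapses."""
    deadline = clock.now() + timeout
    while clock.now() < deadline:
        if is_ready():
            return True
        clock.sleep(poll)
    return False

class FakeClock(Clock):
    """Mock implementation: time only advances when the code under test sleeps."""
    def __init__(self):
        self.t = 0.0
    def now(self):
        return self.t
    def sleep(self, seconds):
        self.t += seconds
```

Driving time through the capability turns the hard-to-reach branch into a one-line test case.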
In future, you can repeat these steps using the same “authority free” library, merging in as many components as is reasonable so you don’t get a proliferation of testable libraries.
Use your code in the complex environment with confidence! Enjoy all the extra free time you will have now that you’re more productive and don’t have bizarre bugs to chase in production.
The Google Test Automation Conference (GTAC) is an annual test automation conference hosted by Google, bringing together engineers to discuss advances in test automation and the test engineering computer science field.
GTAC 2014 was held just a few weeks ago at Google’s Kirkland office (Washington State, US), and we’re happy to present video of talks and topics from both days of the conference.
Are you ready for HP Discover 2014 in Barcelona? I know I am! Check out this blog to learn more about the 'Must Attend Sessions' you need to sign up for now.