
Feed aggregator

Things To Know Before Upgrading to Mac OS X Yosemite (10.10)

The Seapine View - 3 hours 13 min ago

Upgrading to Mac OS X Yosemite (10.10) may impact your Seapine product installation. There are a couple of issues to be aware of:

  • If you use Surround SCM PostgreSQL databases, the Surround SCM Server will not be able to connect to the databases after upgrading Mac OS X. During the upgrade, Mac OS X automatically deletes empty PostgreSQL folders that are required for the PostgreSQL server to run correctly. It’s easy to fix this issue. See this knowledgebase article.
  • If you use TestTrack Web, TestTrack Web Server Admin, SoloSubmit, or Seapine License Server Web Admin, users will not be able to log in to the clients after upgrading Mac OS X on the computer hosting the TestTrack Server or Seapine License Server. Mac OS X 10.10 uses Apache 2.4 and the required mod_cgi module is not enabled by default in this version. This one is easy to fix too. See this knowledgebase article.

We haven’t seen any impact on the TestTrack or Surround SCM Clients after upgrading to Mac OS X.

If you have any questions or need help, please contact Seapine Support.


Categories: Companies

Data mining used to develop comprehensive disease database

Kloctalk - Klocwork - 13 hours 18 min ago

Despite tremendous progress in recent years, contagious diseases remain a worldwide challenge. The current Ebola outbreak in western Africa is a powerful reminder of the damage that these pathogens can cause.

In an effort to combat global disease outbreaks, researchers at the University of Liverpool are working to develop the world's most comprehensive disease database, Labmate Online reported. And to achieve this goal, the scientists are turning to data mining solutions.

Data mining diseases
The source explained that the Liverpool University Climate and Infectious Diseases of Animals team aims to describe and map the connections between diseases and their hosts. All of this information will go into the group's Enhanced Infectious Diseases database, known as EID2.

To develop EID2, the researchers are applying advanced data mining techniques to the massive amount of scientific literature and relevant information already existent in disparate databases, the source explained. A significant portion of this data exists in unstructured or semistructured states, which previously made it difficult to collect and utilize in a single, coherent database. By applying big data analytics tools combined with high performance computing, though, the researchers hope to create a useful resource for anyone studying these pathogens.

Complex matters
According to Labmate Online, the Liverpool researchers have and will continue to utilize the data accumulated in EID2 for a variety of purposes. For example, the scientists have worked to examine the history of different human and animal diseases, tracing their spread and development over many years.

Additionally, the research will prove invaluable for predicting the impact climate change will have on numerous diseases. With this insight, researchers can create maps that reveal where certain diseases are more likely to spring up, and where they are most likely to spread.

Finally, the EID2 data can help disease researchers better understand the often-complex relationships between human and animal carriers and hosts. Improved categorization in this area could lead scientists to discover previously hidden connections between pathogens, which in turn could lead to new avenues for cures and treatments.

Data mining health care
While the EID2 project focuses on global health trends, data mining is also being applied to health-related matters on a more granular basis.

For example, Bloomberg Businessweek reported last month that the Carolinas HealthCare hospital chain uses this technology to analyze patient credit card data. By doing so, the organization is able to identify those patients who are most likely to require treatment in the near future and then take preventative steps to minimize the risk. 

Michael Dulin, chief clinical officer for analytics and outcomes research at Carolinas HealthCare, told the news source that providers can gain a lot more insight into a patient's health by data mining consumer-related information than through a single appointment at the doctor's office. He stated that his organization aims to assign risk scores to patients and deliver this information to the relevant doctors and nurses. These care professionals can then decide if and when to reach out to the affected individuals to provide lifestyle recommendations or encourage a visit to the hospital if they are at risk.

As more hospitals, clinics, doctor's offices and research facilities pursue data mining strategies, it is important for decision-makers to ensure that the right tools are in place to support such efforts. For example, personnel will need access to comprehensive numerical libraries, which can provide reliable, embeddable algorithms that can be incorporated into the organization's applications easily and effectively. Without such assets, many data mining efforts will yield suboptimal results.

Categories: Companies

Authors in Testing Q&A: Dorothy Graham Talks ‘Experiences of Test Automation’

uTest - 13 hours 18 min ago

Dorothy (Dot) Graham has been in software testing for 40 years, and is co-author of four books, including two on test automation (with Mark Fewster).

She was programme chair for EuroSTAR twice and is a popular speaker at international conferences. Dot has been on the boards of publications, conferences and qualifications in software testing. She was awarded the European Excellence Award in Software Testing in 1999 and the first ISTQB Excellence Award in 2012. You can visit her at her website.

In this Q&A, uTest spoke with Dot about her experiences in automation, its misconceptions, and some of her favorite stories from her most recent book which she co-authored, ‘Experiences of Test Automation: Case Studies of Software Test Automation.’ Stay tuned at the end of the interview for chapter excerpt previews of the book, along with an exclusive discount code to purchase.

uTest: Could you tell us a little more about the path that brought you to automation?

Dorothy Graham: That’s easy – by accident! My first job was at Bell Labs and I was hired as a programmer (my degrees were in Maths, there weren’t many computer courses back in the 1970s). I was put into a testing team for a system that processed signals from hydrophones, and my job was to write test execution and comparison utilities (as they were called then, not tools).

My programs were written on punched cards in Fortran, and if we were lucky, we got more than one “turn-around” a day on the Univac 1108 mainframe (when the program was run and we got the results – sometimes “didn’t compile”). Things have certainly moved on a lot since then! However, I think I may have written one of the first “shelfware” tools, as I don’t think it was used again after I left (that taught me something about usability)!

uTest: There’s a lot of misconceptions out there amongst management that automation will be a cure-all to many things, including cost-cutting within testing teams. What is the biggest myth you’d want to dispel about test automation?

DG: The biggest misconception is that automated tests are the same as manual tests – they are not! Automated tests are programs that check something – the tool only runs what it has been programmed to run, and doesn’t do any thinking. This misconception leads to many mistakes in automation — for example, trying to automate all — and only — manual tests. Not all manual tests should be automated. See Mike Baxter et al’s chapter (25) in my Experiences book for a good checklist of what to automate.

This misconception also leads to the mistaken idea that tools replace testers (they don’t, they support testers!), not realizing that testing and automating require different skillsets, and not distinguishing good objectives for automation from objectives for testing (e.g. expecting automated regression tests to find lots of bugs). I could go on…

uTest: What are you looking for in an automation candidate that you wouldn’t be looking for in a QA or software tester?

DG: If you are looking for someone to design and construct the automation framework, then software design skills are a must, since the test execution tools are software programs. However, not everyone needs to have programming skills to use automation – every tester should be able to write and run automated tests, but they may need support from someone with those technical skills. But don’t expect a developer to necessarily be good at testing – testing skills are different than development skills.

uTest: You were the first Programme Chair for EuroSTAR, one of the biggest testing events in Europe, back in 1993, and repeated this in 2009. Could you talk about what that entailed and one of the most valuable things you gained out of EuroSTAR’s testing sessions or keynotes?

DG: My two experiences of being Programme Chair for EuroSTAR were very different! SQE in the US made it possible to take the major risk of putting on the very first testing conference in Europe, by financially underwriting the BCS SIGIST (Specialist Group In Software Testing). Organizing this in the days before email and the web was definitely a challenge!

In 2009, the EuroSTAR team, based in Galway, gave tremendous support; everything was organized so well. They were great in the major planning meeting with the Programme Committee, so we could concentrate on content, and they handled everything else. The worst part was having to say no to people who had submitted good abstracts!

I have heard many excellent keynotes and sessions over the years – it’s hard to choose. There are a couple that I found very valuable though: Lee Copeland’s talk on co-dependent behavior, and Isabel Evans’ talk about the parallels with horticulture. Interesting that they were both bringing insights into testing from outside of IT.

uTest: Your recent book deals with test automation actually at work in a wide variety of organizations and projects. Could you describe one of your favorite case studies of automation gone right (or wrong) from the book, and what you learned from the experience?

DG: Ah, that’s difficult – I have many favorites! Every case study in the book is a favorite in some way, and it was great to collect and critique the stories. The “Anecdotes” chapter contains lots of smaller stories, with many interesting and diverse lessons.

The most influential case study for me, which I didn’t realize at the time, was Seretta Gamba’s story of automating “through the back door.” When she read the rest of the book, she was inspired to put together the Test Automation Patterns, which we have now developed into a wiki. We hope this will continue to disseminate good advice about automation, and we are looking for more people to contribute their experiences of automation issues or using some of the patterns.

uTest has arranged for a special discount of 35% off the purchase of ‘Experiences of Test Automation: Case Studies of Software Test Automation’ here by entering the code SWTESTING at checkout (offer expires Dec. 31, 2014). 

Additionally, Dot has graciously provided the following exclusive chapter excerpts to preview: 

Categories: Companies

How to create effective adverts for recruiting software testers

The Social Tester - 18 hours 18 min ago

When recruiting software testers, many hiring managers look for the impossible candidate who can do everything. These people don’t exist, yet many hiring managers continue to place job adverts that seek them out. What follows are five ways to help you create effective adverts for recruiting software testers. When I … Read More →

The post How to create effective adverts for recruiting software testers appeared first on The Social Tester.

Categories: Blogs

The Two-Minute Open Source Risk Assessment

Sonatype Blog - Tue, 10/21/2014 - 23:15
In two minutes, we can show you if there are any open source risks within your Java application.  And it’s free. That’s right, at Sonatype, we could not be more in favor of the code reuse that occurs millions of times a day thanks to the availability of open source and third-party components.  At...

To read more, visit our blog at blog.sonatype.com.
Categories: Companies

TDD and Asynchronous Behavior: Part 2

Sustainable Test-Driven Development - Tue, 10/21/2014 - 22:45
Categories: Blogs

TDD and Asynchronous Behavior: Part 1

Sustainable Test-Driven Development - Tue, 10/21/2014 - 22:29
In TDD we write tests as specifications, where each of them is focused on a single behavior of the system. A “good” test in TDD makes a single, unique distinction about the system. But this means when the TDD process is complete, our spec also serves as a suite of tests against regression. This is a hugely valuable side-effect of TDD. We don’t
Categories: Blogs

Latest Testing in the Pub Podcast: Part II of Software Testing Hiring and Careers

uTest - Tue, 10/21/2014 - 21:02

The latest Testing in the Pub podcast continues the discussion on what test managers need to look out for when recruiting testers, and what testers need to do when seeking out a new role in the testing industry.

There’s a lot of practical advice in this edition served over pints at the pub — from the perfect resume/CV length (one page is too short!) to a very candid discussion on questions that are pointless when gauging whether someone is the right fit for your testing team.

Part II of the two-part podcast is available right here for download and streaming, and is also available on YouTube and iTunes. Be sure to check out the entire back catalog of the series as well, and Stephen’s recent interview with uTest.

Categories: Companies

The Mystery of HMRC and the Faulty Tax Statements

By George Wilson, director of Original Software

Barely a tax quarter seems to pass without another HMRC IT glitch hitting the headlines. The latest involves erroneous tax calculations being sent to up to five million people with statements saying that more tax would be collected in the subsequent year, or the individual would receive a rebate cheque. The amount involved is usually about £200.

HMRC has laid the blame solely at the doorstep of employers, saying that the majority of errors were made because employers submitted inadequate or wrong information. But accountancy bodies like ACCA have hit back and said that the Revenue is to blame for mistakes in cross checking information from its own records, self assessment forms and PAYE information from employers. But the system that HMRC and employers use – called Real Time Information – has also had problems.

HMRC, which has pointed out that the numbers affected are likely to be less than 100,000, has said it will “push the boat out” to help people who have already spent their rebate cheques.

Industry insiders have damned the Revenue’s systems and in emails leaked to the Telegraph, have said that the system is “not fit for purpose, it’s inherently flawed and routinely produces errors that cause a huge mess for families and employers.”

The Revenue – no stranger to bad headlines – risks having its reputation tarnished further with its skewed approach to quality. In this situation, it is evident that the worlds of systems and processes failed to align, with inaccurate or inadequate information being fed into the system in the first place and process holes in cross checking and referencing the data.

With the volumes of transactions the Revenue is engaged in, there will always be glitches and problems, but the sheer number of mishaps indicates that something is seriously wrong. And HMRC could do something about it. It needs to take real control of the quality of its systems and processes. Automated testing solutions can run through processes and data and flag up any discrepancies immediately, meaning – in this instance – that data sources could have been verified before statements were sent out.

It is a government department, and reputation might not be quite as valuable as it would be to a commercial organization, but the Revenue can’t afford to keep alienating taxpayers and stakeholders. It needs to take firm control of its quality assurance, and do so quickly.

The post The Mystery of HMRC and the Faulty Tax Statements appeared first on Original Software.

Categories: Companies

CAST Improves Software Quality Metric Application

Software Testing Magazine - Tue, 10/21/2014 - 19:04
CAST has announced an update to its Application Intelligence Platform (AIP). AIP 7.3 makes measuring software quality across an organization’s application portfolio more automated, informative, and customizable. In AIP 7.3, CAST introduces several ease-of-use enhancements to its Application Analytics Dashboard (AAD) that allow product managers and CIOs alike to discover actionable insights faster and more easily. New ways to filter and a new view of quality measures from multiple software releases expand the organization’s ability to quickly zoom in on the relevant quality issues and pinpoint specific changes between ...
Categories: Communities

Open Source Load Testing Tools Comparison: Which One Should You Use?

uTest - Tue, 10/21/2014 - 18:04

This piece was originally published by our good friends at BlazeMeter – the Load Testing Cloud. Don’t forget to also check out all of the load testing tool options out there — and other testing tools — along with user-submitted reviews at our Tool Reviews section of the site.

Is your application, server or service fast enough? How do you know? Can you be 100% sure that your latest feature hasn’t triggered a performance degradation or memory leak?

The only way to be sure is by regularly checking the performance of your web or app. But which tool should you use for this?

In this article, I’m going to review the pros and cons of the most popular open source solutions for load and performance testing.

Chances are that most of you have already seen this page. It’s a great list of 53 of the most commonly used open source performance testing tools. However, some of these tools are limited to the HTTP protocol only, some haven’t been updated for years, and most aren’t flexible enough to provide parametrization, correlation, assertions and distributed testing capabilities.

Given the challenges that most of us are facing today, I would only consider using the following four from this list:

  1. Grinder
  2. Gatling
  3. Tsung
  4. JMeter

So these are the four that I’m going to review here. In this article, I’ll cover the main features of each tool, show a simple load test scenario and an example of the reports. I’ve also put together a comparison matrix at the end of this report – to help you decide which tool is best for your project ‘at a glance’ .

The Test Scenario and Infrastructure

For the comparison demo, I’ll be using a simple HTTP GET request sent by 20 threads, with 100,000 iterations. Each tool will be sending requests as fast as it can.

The server (application under test) side:

CPU: 4x Xeon L5520 @ 2.27 GHz
RAM: 8 GB
OS: Windows Server 2008 R2 x64
Application Server: IIS 7.5.7600.16385

The client (load generator) side:

CPU: 4x Xeon L5520 @ 2.27 GHz
RAM: 4 GB
OS: Ubuntu Server 12.04 64-bit
Load Test Tools:
Grinder 3.11
Gatling 2.0.0.M3a
Tsung 1.51
JMeter 2.11
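
Before looking at each tool, here is a rough, plain-Java sketch of what the scenario above boils down to: a fixed pool of 20 threads repeatedly issuing GET requests while response times and counts are accumulated. This is not code from any of the four tools – just an illustration of how the reported metrics (average response time and throughput) are derived. The target URL is a placeholder, and treating the 100,000 iterations as a per-thread count is an assumption.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class SimpleHttpLoadDriver {

    static final int THREADS = 20;
    static final int ITERATIONS_PER_THREAD = 100_000;              // assumption: iterations per thread
    static final URI TARGET = URI.create("http://localhost/");     // placeholder application under test

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(TARGET).GET().build();
        AtomicLong totalNanos = new AtomicLong();   // sum of individual response times
        AtomicLong completed = new AtomicLong();    // number of successful requests

        long testStart = System.nanoTime();
        ExecutorService pool = Executors.newFixedThreadPool(THREADS);
        for (int t = 0; t < THREADS; t++) {
            pool.submit(() -> {
                for (int i = 0; i < ITERATIONS_PER_THREAD; i++) {
                    try {
                        long start = System.nanoTime();
                        client.send(request, HttpResponse.BodyHandlers.discarding());
                        totalNanos.addAndGet(System.nanoTime() - start);
                        completed.incrementAndGet();
                    } catch (Exception e) {
                        // a real load tool would record this as a failed sample
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);

        double elapsedSeconds = (System.nanoTime() - testStart) / 1e9;
        System.out.printf("Average response time: %.1f ms%n",
                totalNanos.get() / 1e6 / completed.get());
        System.out.printf("Average throughput: %.1f requests/second%n",
                completed.get() / elapsedSeconds);
    }
}

The real tools add what this sketch leaves out: ramp-up, assertions, correlation, distributed load generation and reporting – which is exactly what the comparison below is about.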

The Grinder

The Grinder is a free Java-based load testing framework available under a BSD-style open source license. It was developed by Paco Gomez and is maintained by Philip Aston. Over the years, the community has also contributed many improvements, fixes and translations.

The Grinder consists of two main parts:

  1. The Grinder Console – This is a GUI application that controls the various Grinder agents and monitors results in real time. The console can be used as a basic IDE for editing or developing test suites.
  2. Grinder Agents – These are headless load generators; each can have a number of workers to create the load.

Key Features of the Grinder:

  1. TCP proxy – records network activity into the Grinder test script
  2. Distributed testing – can scale with the increasing number of agent instances
  3. The power of Python or Clojure combined with any Java API for test script creation or modification
  4. Flexible parameterization which includes creating test data on-the-fly and the capability to use external data sources like files, databases, etc.
  5. Post processing and assertion – full access to test results for correlation and content verification
  6. Support of multiple protocols

The Grinder Console Running a Sample Test


Grinder Test Results:


Gatling

The Gatling Project is another free and open source performance testing tool, primarily developed and maintained by Stephane Landelle. Like The Grinder, Gatling has a basic GUI – limited to a test recorder only. However, the tests can be developed in an easily readable and writable domain-specific language (DSL).

Key Features of Gatling:

  1. HTTP Recorder
  2. An expressive self-explanatory DSL for test development
  3. Scala-based
  4. Produces higher load by using an asynchronous non-blocking approach
  5. Full support of HTTP(S) protocols & can also be used for JDBC and JMS load testing
  6. Multiple input sources for data-driven tests
  7. Powerful and flexible validation and assertions system
  8. Comprehensive informative load reports

The Gatling Recorder Window:


An Example of a Gatling Report for a Load Scenario


Tsung

Tsung (previously known as IDX-Tsunami) is the only non-Java based open source performance testing tool in today’s review. Tsung relies on Erlang so you’ll need to have it installed (for Debian/Ubuntu, it’s as simple as “apt-get install erlang”). The development of Tsung was started in 2001 by Nicolas Niclausse – who originally implemented a distributed load testing solution for Jabber (XMPP). Several months later, support for more protocols was added and in 2003 Tsung was able to perform HTTP Protocol load testing.

It is currently a fully functional performance testing solution with the support of modern protocols like websocket, authentication systems, databases, etc.

Key Features of Tsung:

  1. Distributed by design
  2. High performance. The underlying multithreaded Erlang architecture enables the simulation of thousands of virtual users on mid-range developer machines
  3. Support of multiple protocols
  4. A test recorder which supports HTTP and Postgres
  5. OS monitoring. Operating system metrics for both the load generator and the application under test can be collected via several protocols
  6. Dynamic scenarios and mixed behaviours. The flexible load scenarios definition mechanism allows for any number of load patterns to be combined in a single test
  7. Post processing and correlation
  8. External data sources for data driven testing
  9. Embedded, easily readable load reports that can be collected and visualized during the load test

Tsung doesn’t provide a GUI for test development or execution, so you’ll have to live with the shell scripts, which are:

  1. tsung-recorder – a bash recording utility capable of capturing HTTP and Postgres requests and creating a Tsung config file from them
  2. tsung – the main bash control script to start/stop/debug and view the status of your test
  3. tsung_stats.pl – a Perl script to generate HTML statistical and graphical reports. It requires gnuplot and the Perl Template library to work. For Debian/Ubuntu, the commands are
    –   apt-get install gnuplot
    –   apt-get install libtemplate-perl

The main tsung script invocation produces the following output:


Running the test:


Querying the current test status:


Generating the statistics report with graphs can be done via the tsung_stats.pl script:


Open report.html with your favorite browser to get the load report. A sample report for a demo scenario is provided below:

A Tsung Statistical Report


A Tsung Graphical Report


Apache JMeter

Apache JMeter is the only desktop application from today’s list. It has a user-friendly GUI, making test development and debugging processes much easier.

The earliest version of JMeter available for download is dated the 9th of March, 2001. Since that date, JMeter has been widely adopted and is now a popular open-source alternative to proprietary solutions like Silk Performer and LoadRunner. JMeter has a modular structure, in which the core is extended by plugins. This basically means that all the implemented protocols and features are plugins that have been developed by the Apache Software Foundation or online contributors.

Key Features of JMeter:

  1. Cross-platform. JMeter can be run on any operating system with Java
  2. Scalable. When you need to create a higher load than a single machine can create, JMeter can be executed in a distributed mode – meaning one master JMeter machine will control a number of remote hosts.
  3. Multi-protocol support. The following protocols are all supported ‘out-of-the-box’: HTTP, SMTP, POP3, LDAP, JDBC, FTP, JMS, SOAP, TCP
  4. Multiple implementations of pre- and post-processors around samplers, providing advanced setup, teardown, parametrization and correlation capabilities
  5. Various assertions to define criteria
  6. Multiple built-in and external listeners to visualize and analyze performance test results
  7. Integration with major build and continuous integration systems – making JMeter performance tests part of the full software development life cycle
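
To make JMeter’s “Java core plus plugins” architecture a bit more concrete, the sketch below uses JMeter’s Java API to build and run the demo scenario (20 threads looping an HTTP GET) entirely from code, without the GUI. Treat it as a hedged sketch rather than an official recipe: it assumes the JMeter jars are on the classpath, the JMeter home and jmeter.properties paths and the target host are placeholders, and no listener is attached, so results would need to be collected separately.

import org.apache.jmeter.control.LoopController;
import org.apache.jmeter.engine.StandardJMeterEngine;
import org.apache.jmeter.protocol.http.sampler.HTTPSamplerProxy;
import org.apache.jmeter.testelement.TestPlan;
import org.apache.jmeter.threads.ThreadGroup;
import org.apache.jmeter.util.JMeterUtils;
import org.apache.jorphan.collections.HashTree;

public class ProgrammaticJMeterTest {
    public static void main(String[] args) {
        // Point JMeter at an existing installation so it can load its properties (placeholder paths).
        JMeterUtils.setJMeterHome("/path/to/apache-jmeter");
        JMeterUtils.loadJMeterProperties("/path/to/apache-jmeter/bin/jmeter.properties");
        JMeterUtils.initLocale();

        // The sampler: a plain HTTP GET against the application under test (placeholder host).
        HTTPSamplerProxy getRequest = new HTTPSamplerProxy();
        getRequest.setDomain("localhost");
        getRequest.setPort(80);
        getRequest.setPath("/");
        getRequest.setMethod("GET");

        // Loop each thread through the sampler a fixed number of times.
        LoopController loopController = new LoopController();
        loopController.setLoops(100000);
        loopController.setFirst(true);
        loopController.initialize();

        // 20 concurrent threads, matching the demo scenario.
        ThreadGroup threadGroup = new ThreadGroup();
        threadGroup.setNumThreads(20);
        threadGroup.setRampUp(1);
        threadGroup.setSamplerController(loopController);

        // Assemble the test plan tree and run it with the in-process engine.
        TestPlan testPlan = new TestPlan("Simple GET load test");
        HashTree testPlanTree = new HashTree();
        testPlanTree.add(testPlan);
        HashTree threadGroupTree = testPlanTree.add(testPlan, threadGroup);
        threadGroupTree.add(getRequest);

        StandardJMeterEngine jmeter = new StandardJMeterEngine();
        jmeter.configure(testPlanTree);
        jmeter.run();
    }
}

In practice most people build the same plan in the GUI and save it as a .jmx file, but driving the engine from code like this is what makes the CI integration mentioned above possible.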

The JMeter Application With an Aggregated Report on the Load Scenario:


The Grinder, Gatling, Tsung & JMeter Put to the Test

Let’s compare the load test results of these tools with the following metrics:

  1. Average Response Time (ms)
  2. Average Throughput (requests/second)
  3. Total Test Execution Time (minutes)

First, let’s look at the average response and total test execution times:


Now, let’s see the average throughput:


As you can see, JMeter has the fastest response times with the highest average throughput, followed by Tsung and Gatling. The Grinder has the slowest times with the lowest average throughput.

Features Comparison Table

And finally, here’s a comparison table of the key features offered to you by each testing tool:

  • OS: The Grinder – Any; Gatling – Any; Tsung – Linux/Unix; JMeter – Any
  • GUI: The Grinder – console only; Gatling – recorder only; Tsung – no; JMeter – full
  • Test Recorder: The Grinder – TCP (including HTTP); Gatling – HTTP; Tsung – HTTP, Postgres; JMeter – HTTP
  • Test Language: The Grinder – Python, Clojure; Gatling – Scala; Tsung – XML; JMeter – XML
  • Extension Language: The Grinder – Python, Clojure; Gatling – Scala; Tsung – Erlang; JMeter – Java, Beanshell, Javascript, Jexl
  • Load Reports: The Grinder – console; Gatling – HTML; Tsung – HTML; JMeter – CSV, XML, embedded tables, graphs, plugins
  • Protocols: The Grinder – HTTP, SOAP, JDBC, POP3, SMTP, LDAP, JMS; Gatling – HTTP, JDBC, JMS; Tsung – HTTP, WebDAV, Postgres, MySQL, XMPP, WebSocket, AMQP, MQTT, LDAP; JMeter – HTTP, FTP, JDBC, SOAP, LDAP, TCP, JMS, SMTP, POP3, IMAP
  • Host Monitoring: The Grinder – no; Gatling – no; Tsung – yes; JMeter – yes, with the PerfMon plugin
  • Limitations: The Grinder – Python knowledge required for test development and editing; reports are very plain and brief. Gatling – limited support of protocols; knowledge of the Scala-based DSL required; does not scale. Tsung – tested and supported only on Linux systems; bundled reporting isn’t easy to interpret.

More About Each Testing Tool

Want to find out more about these tools? Visit the websites below – or post a comment here and I’ll do my best to answer!

The Grinder – http://grinder.sourceforge.net/
Gatling – http://gatling.io/
Tsung – http://tsung.erlang-projects.org/
JMeter
  –  Home Page: http://jmeter.apache.org/
  –  JMeter Plugins: http://jmeter-plugins.org/
  –  BlazeMeter’s Plugin for JMeter: http://blazemeter.com/blazemeters-plug-jmeter

On a Final Note…

I truly hope that you’ve found this comparison review useful and that it’s helped you decide which open source performance testing tool to opt for. Out of all these tools, my personal recommendation has to be JMeter. This is what I use myself – along with BlazeMeter’s Load Testing Cloud, because of its support for different JMeter versions, plugins and extensions.

Categories: Companies

New overview page helps you build your portfolio and deliver on time

Assembla - Tue, 10/21/2014 - 17:19

As a follow-up to our initiative to improve management of multiple digital projects, we just released the new Portfolio Overview page to make it easy to build a new portfolio and manage your deliverables. This page is designed to address the following use cases:

1) Getting Started - The new overview page is where all new portfolios will land upon logging in. You can quickly build your portfolio by inviting team members and creating workspaces. You can review your progress with a count of the people in your spaces and a count of your active spaces.


2) What your teams should be working on - For existing portfolios with a lot of spaces and deliverables, the overview page will show you any overdue milestones and upcoming milestones or deliverables over the next two weeks. There is also a progress bar indicating the amount of work that is complete and in progress, with links to the closed and open tickets reports. This will help delivery managers ensure that their teams stay focused on the right priorities.


3) See what your team is actually working on - We plan to add views that will show you your most active spaces and most active users. This way, portfolio managers can easily drill down to the individual milestones and tickets that your teams are spending most of their time on.

4) Navigate to other portfolio views and reports - The overview page will serve as the jumping-off point to all the portfolio views and reports you care about. We would love to hear from you about what you would like to see added to the page.

For all the users who created their portfolios this month, the overview page is your default page in your portfolio. The rest of the portfolio users can access the overview page by clicking on the tab next to your ‘start’ tab. We did not want to force a new default page on you without first letting you know and inviting your feedback.

We will release it as the default page in all portfolios after we confirm it is providing the most important information that you need at 8:00 a.m. with your coffee.  

We would love to hear from you. You can submit your feedback by commenting below or by clicking on the blue bubble icon with the question mark at the bottom-right corner of your Assembla application.

Please subscribe or follow us to get our latest updates!

Best Regards - Maxi.

TRY ASSEMBLA PORTFOLIO FOR FREE

 

Categories: Companies

New feature: Clone tickets

Assembla - Tue, 10/21/2014 - 16:39

New Feature: Clone your tickets! Many users asked on our feedback site, feedback.assembla.com, for the ability to clone tickets. Now you can!


 

Categories: Companies

Find the Root Cause Faster with Dynatrace 6.1

I am pleased to announce that Dynatrace 6.1 Beta is now available for everyone who is interested in building better-performing applications. Dynatrace 6.1 includes many enhancements requested by our 83k+ user community. We took the feedback we received from our users and invested heavily in Ease of Use and more Automatic Diagnostics. Follow these […]

The post Find the Root Cause Faster with Dynatrace 6.1 appeared first on Compuware APM Blog.

Categories: Companies

Do we need a Tech Lead?

thekua.com@work - Tue, 10/21/2014 - 12:03

A common question I hear is, “Is the Tech Lead role necessary?” People argue against the role, claiming a team of well-functioning developers can make decisions and prioritise what is important to work on. I completely agree with this position in an ideal world. Sadly, the ideal world rarely exists.

Even when perfect conditions exist, during which team members talk to each other openly, discussing pros and cons before arriving at an agreed solution, it doesn’t take much to upset this delicate balance. Sometimes all it takes is a new person joining the team, a person leaving, or some stressful critical situation to drive the team into a state where arguing continues without end. My friend Roy Osherove calls this the “Chaos state.” I agree with him that a different style of leadership may be required, similar to the Situational Leadership Model.

Technical debates occur frequently in development teams. There is nothing worse than when the team reaches a frozen state of disagreement.

[Image: tabs vs. spaces – taken from the EmacsWiki]

The Tech Lead has the responsibility to help the team move forwards. Sometimes that means using their authority. Sometimes it means working with the team to find a way forward. Facilitation and negotiation skills are invaluable assets to a Tech Lead. Understanding decision making models helps the Tech Lead decide when to step in, or when to step back. What is important is finding a way forward.

Tech Leads are also beneficial to people outside of the team, forming a single point of contact. Medium to large organisations start to hit communication barriers because there are too many relationships to effectively build and maintain. The Tech Lead role simplifies the communication path, although it simultaneously adds a single point of failure. The balance between these two trade-offs should be carefully managed and monitored.

When played well, the Tech Lead provides countless other benefits; however, the Tech Lead role does not have to be played by a single person. I admire teams who say they don’t have a Tech Lead and still deliver software effectively. They have successfully distributed the Tech Lead responsibilities or established processes to mitigate the need for the role. It does not necessarily mean the role itself is useless. The Tech Lead role is just that – a role. Instead of focusing on whether or not the role should exist, it is better to focus on ensuring all Tech Lead responsibilities are met.

If you liked this article exploring the Tech Lead role, you will be interested in “Talking with Tech Leads,” a book that shares real life experiences from over 35 Tech Leads around the world. Now available on Leanpub.

Categories: Blogs

Automated Acceptance Testing for Mobile Apps with Calabash

Testing TV - Tue, 10/21/2014 - 09:31
Calabash is an open-source technology for automated acceptance testing of mobile native and hybrid apps. It provides a uniform interface to automated testing of Android and iOS apps. Technically, Calabash consists of Ruby (and soon JVM) libraries that provide advanced automation technology on both platforms. Behavior-driven development (BDD) is supported via the Cucumber tool. This […]
Categories: Blogs

Facts and Figures in Software Engineering Research

DevelopSense Blog - Tue, 10/21/2014 - 03:44
On July 23, 2002, Capers Jones, Chief Scientist Emeritus of a company called Software Productivity Research, gave a presentation called “SOFTWARE QUALITY IN 2002: A SURVEY OF THE STATE OF THE ART”. In this presentation, he provided the sources for his data on the second slide: SPR clients from 1984 through 2002 • About 600 […]
Categories: Blogs

TheNexus: A Community Project

Sonatype Blog - Mon, 10/20/2014 - 22:39
With over 42,000 Nexus instances deployed at enterprises around the world, we thought it was time to set up a community based around our products: Nexus and CLM. Earlier this month, we launched TheNEXUS Community, including exclusive members-only content — where we are already over 700...

To read more, visit our blog at blog.sonatype.com.
Categories: Companies

Mobile App for Jenkins User Conference Bay Area

The Jenkins User Conference in the Bay Area is this Thursday, and one of the new things this year is a mobile app.

There's an Android version as well as an iPhone version. I've installed it, and it's very handy for checking the agenda and getting more info about speakers and sponsors.

Categories: Open Source

Testing on the Toilet: Writing Descriptive Test Names

Google Testing Blog - Mon, 10/20/2014 - 20:22
by Andrew Trenk

This article was adapted from a Google Testing on the Toilet (TotT) episode. You can download a printer-friendly version of this TotT episode and post it in your office.

How long does it take you to figure out what behavior is being tested in the following code?

@Test public void isUserLockedOut_invalidLogin() {
  authenticator.authenticate(username, invalidPassword);
  assertFalse(authenticator.isUserLockedOut(username));

  authenticator.authenticate(username, invalidPassword);
  assertFalse(authenticator.isUserLockedOut(username));

  authenticator.authenticate(username, invalidPassword);
  assertTrue(authenticator.isUserLockedOut(username));
}

You probably had to read through every line of code (maybe more than once) and understand what each line is doing. But how long would it take you to figure out what behavior is being tested if the test had this name?

isUserLockedOut_lockOutUserAfterThreeInvalidLoginAttempts

You should now be able to understand what behavior is being tested by reading just the test name, and you don’t even need to read through the test body. The test name in the above code sample hints at the scenario being tested (“invalidLogin”), but it doesn’t actually say what the expected outcome is supposed to be, so you had to read through the code to figure it out.

Putting both the scenario and the expected outcome in the test name has several other benefits:

- If you want to know all the possible behaviors a class has, all you need to do is read through the test names in its test class, compared to spending minutes or hours digging through the test code or even the class itself trying to figure out its behavior. This can also be useful during code reviews since you can quickly tell if the tests cover all expected cases.

- Giving tests more explicit names forces you to split up testing of different behaviors into separate tests. Otherwise you may be tempted to dump assertions for different behaviors into one test, which over time can lead to tests that keep growing and become difficult to understand and maintain.

- The exact behavior being tested might not always be clear from the test code. If the test name isn’t explicit about this, sometimes you might have to guess what the test is actually testing.

- You can easily tell if some functionality isn’t being tested. If you don’t see a test name that describes the behavior you’re looking for, then you know the test doesn’t exist.

- When a test fails, you can immediately see what functionality is broken without looking at the test’s source code.

There are several common patterns for structuring the name of a test (one example is to name tests like an English sentence with “should” in the name, e.g., shouldLockOutUserAfterThreeInvalidLoginAttempts). Whichever pattern you use, the same advice still applies: Make sure test names contain both the scenario being tested and the expected outcome.

Sometimes just specifying the name of the method under test may be enough, especially if the method is simple and has only a single behavior that is obvious from its name.
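
As a small, hedged illustration of this advice, here is one way the sample test above might be split so that each test checks a single behavior and its name states both the scenario and the expected outcome (the fixture – authenticator, username, invalidPassword – is assumed to be the same as in the original snippet):

@Test public void isUserLockedOut_twoInvalidLoginAttempts_userIsNotLockedOut() {
  authenticator.authenticate(username, invalidPassword);
  authenticator.authenticate(username, invalidPassword);
  assertFalse(authenticator.isUserLockedOut(username));
}

@Test public void isUserLockedOut_threeInvalidLoginAttempts_userIsLockedOut() {
  authenticator.authenticate(username, invalidPassword);
  authenticator.authenticate(username, invalidPassword);
  authenticator.authenticate(username, invalidPassword);
  assertTrue(authenticator.isUserLockedOut(username));
}

Either the underscore style shown here or the “should…” sentence style works, as long as each name carries both the scenario and the expected outcome.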

Categories: Blogs
