Feed aggregator

Happy Holidays from HP StormRunner Load. 3 new presents.

HP LoadRunner and Performance Center Blog - Tue, 12/23/2014 - 23:06

I don’t know about you, but I’m quickly wrapping up 2014. What an AWESOME year it has been! The spirit of innovation and new technology is everywhere. Here’s one highlight that I want to share with you:

 

Since launching StormRunner Load in October, we’ve been busy working to make sure you have the ideal platform for massive cloud and agile testing. Here are a few highlights of the NEW features you will find in StormRunner Load.

Categories: Companies

uTest Debuts Projects Board, New Profile Design

uTest - Tue, 12/23/2014 - 20:51

As we outlined in our Town Hall Meetings last week, we’ve been busy making some big improvements on the uTest site and we’re happy to tell you about the recent enhancements we rolled out this week.

Paid Projects Board

uTesters may be familiar with using the uTest Forums for finding active Paid Projects that have unique requirements. We debuted the Projects Board to streamline the search and application process, and to easily call out which projects are most urgent.

  • Featured Listings will appear at the top of the main page. These projects have urgent deadlines or unique requirements.
  • Project Dates are an estimation of the start and end dates of the project. As you know, test cycle dates can slip for a variety of reasons, but the start/end dates are meant to give you an idea of when a project will run.
  • Tags are visible on each listing and can be used to further search for listings with that same tag.

Remember, only uTesters with Expanded Profiles are eligible for Paid Projects. Need to expand your profile? Learn how in uTest University.

Enhanced Profile Experience

The uTest Profile also got a UX/UI upgrade and now includes customizable features. You can connect your social profiles to your uTest profile so that fellow testers can connect with you on social media as well. We’ve also made it easier to recommend or to request a recommendation from a fellow uTester. If you haven’t visited your profile lately, swing by and see what’s new.

Don’t forget: Check out our Leaderboard or our Search page if you are looking for uTesters to help fill your network activity feed.

Revamped Home Page

Last, but not least, our new home page is designed to get you where you want to go. We also have a new Getting Started page for testers who are new to uTest or who have yet to join us. Be sure to check out our site and let us know what you think in the comments below!

Categories: Companies

AutoMapper 3.3 feature: Projection conversions

Jimmy Bogard - Tue, 12/23/2014 - 20:42

AutoMapper 3.3 introduced a few features to complement the capabilities available in normal mapping. Just as you can use ConvertUsing to completely replace the conversion between two types, you can also supply a custom projection expression to replace the mapping expression between two types during projection:

Mapper.CreateMap<Source, Dest>()
    .ProjectUsing(src => new Dest { Value = 10 });
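
With that map in place, projection uses the supplied expression wholesale. A minimal usage sketch (dbContext.Sources here is a hypothetical DbSet):

var dests = dbContext.Sources.Project().To<Dest>().ToList();
// Every element comes back with Value == 10, regardless of the source values.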

Occasionally, you don’t want to replace the entire mapping, but there’s just a constructor you need to have some custom logic in:

Mapper.CreateMap<Source, Dest>()
    .ConstructProjectionUsing(src => new Dest(src.Value + 10));

AutoMapper can automatically match up source members to destination constructor parameters, but if things don’t quite line up or you need some additional logic, this gives you a place to put it.

Additionally, AutoMapper can automatically convert types to strings (by just calling .ToString()) any time it sees a string member on the destination type:

public class Order {
    public OrderTypeEnum OrderType { get; set; }
}
public class OrderDto {
    public string OrderType { get; set; }
} 
var orders = dbContext.Orders.Project().To<OrderDto>().ToList();
orders[0].OrderType.ShouldEqual("Online");

If you need to do more custom logic for a string conversion, you can either supply a global conversion (ProjectUsing) or a member conversion (MapFrom).
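
For example, a member-level conversion might look like this (a minimal sketch using the Order/OrderDto types above; the enum check is illustrative):

Mapper.CreateMap<Order, OrderDto>()
    .ForMember(dest => dest.OrderType,
        opt => opt.MapFrom(src => src.OrderType == OrderTypeEnum.Online ? "Online" : "Other"));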

Finally, if you have some custom projection expansions and you only want to expand certain members on the destination, you can supply a series of projection expressions and only those explicitly specified members will be included in the resulting expansion:

dbContext.Orders.Project().To<OrderDto>(
    parameters: null,
    dest => dest.Customer,
    dest => dest.LineItems);
// or string-based
dbContext.Orders.Project().To<OrderDto>(
    parameters: null,
    "Customer",
    "LineItems");

I would usually recommend against doing this, but I’ve seen cases, such as OData, where this sort of scenario is expected. In the next post, I’ll cover one of the more interesting features in 3.3 – parameterized projections. Enjoy!

Categories: Blogs

New uTest Platform Features: Bug Reporting and Communication Enhancements

uTest - Tue, 12/23/2014 - 18:12
“Quality is not an act — it’s a habit.”

-Aristotle

New Features: +1

The nature of uTest testing is such that the tester who files a bug report first receives the credit. This can be frustrating for testers who invest the time to find a bug only to see that it has been previously reported, but on a different environment. While the tester who files the bug first will continue to receive the credit, we want to increase the collaborative nature of the testing experience and allow testers to +1 another’s bug report.

We are pleased to announce that today we released a +1 Feature to the uTest Platform for testers on paid projects. This feature allows testers to confirm the reports of others by selecting one or multiple environments on which they reproduced the issue. They can also enter an optional comment with clarifications.

In addition to fostering a more collaborative testing team, we also hope that testers will use this feature to increase the quality that uTest delivers to its customers by demonstrating which issues are most consistently reproducible.

Feature Improvements

Copy & Paste Attachments: Testers can now copy and paste screenshots directly from their ‘clipboard’ into their bug reports. This will make it much easier to add screenshots when testing web and desktop applications. Note: The feature is only available in Chrome at this time.

Browser Tab Notifications: When testers open a test cycle in the Platform, they will now see the number of unread messages from that cycle in the tab of their browser.

Currency Conversions: Testers are now able to convert their current payout and their payment history to the currency of their choice via the Settings page. Two important things to note about this feature: First, testers will still be paid in US dollars. This conversion will just show how much was earned in the local currency. Second, conversions of earnings will be at today’s rate rather than historical, so converting past payments will be a bit skewed depending on the currency’s historical valuation.

Chat Notifications: Testers now have the option of enabling their computer to make a ‘beep’ sound when they receive new chat messages. This can be activated or disabled via the Settings page.

If you like what you see, feel free to leave your comments below, or share your ideas on these and other recent platform updates by visiting the uTest Forums. We’d love to hear your suggestions, and frequently share this valuable feedback with our development team for future platform iterations!

Categories: Companies

We Automated 12 Android Phones To Sing ‘Appy Holidays To You [VIDEO]

Sauce Labs - Tue, 12/23/2014 - 17:56

With Holiday Season and the close of the fiscal year approaching, we brought in all of our remote workers to spend some time together. We had the obligatory corporate holiday party, a late-night LAN party, and countless, invaluable meetings that laid the foundation for software development projects in the new year.

Our VP of Engineering also arranged our first hackathon, in which we submitted ideas for programming projects that were tangentially related to our business. To participate, we were broken into teams and powered through our ideas and execution over two days.

Some of our developers had been interested in Android phones, but hadn’t had a chance to play with them. Thus, I ended up on a team of three with the goal of getting multiple Android devices to sing in harmony.

We’re happy to share the result of our efforts with this cheerful holiday video.

See the original video here. Follow the conversation on Reddit here and here and on HackerNews here.

So, how does it work?

We went with a pretty quick-and-dirty approach, seeing as this was a two-day event. Having multiple devices sync musical notes in real time was definitely out of the question, so we opted to see if we could coordinate the phones to begin playing their individual parts of a song at the same time.

Android devices are finicky and difficult to coordinate: many of their operations take varying amounts of time, even on identical devices. Simply pushing a song to each of them and telling them to play results in some phones playing within a second while others can be up to four seconds behind.

Here’s what we did:

@classam wrote a Python script which takes any MIDI file and distributes its separate instrumental parts onto an arbitrary number of tracks. It’s pretty neat: specify just two tracks, and half the parts of the song will be played on one track and half on the other. If you specify more tracks than available parts, it copies the more significant parts onto the leftover tracks so every phone will feel like it’s a valued member of the choir.

I wrote an Android app which runs on the phones. The app runs on each phone and listens on a socket. When it gets the ‘GO’ command over the USB cable, it plays the music file it’s been given. It also displays a musical visualization of the sound it’s playing so you can match up the phones to the sounds you hear.

@etmillin wrote a JavaScript program which finds all the Android devices connected to a computer, gets a connection to the song app running on each one, and sends the ‘GO’ command to all of them at once.

When you glue our three pieces together, the following happens:

  • A MIDI song gets broken into parts, one for each phone
  • The parts of the song get uploaded, one to each phone
  • The song-playing app gets started on all the phones
  • The ‘GO’ command gets sent to all the phones at once

Now they all play together!

Other hackathon projects included experimenting with different types of network messaging solutions, DevOps management tools, and ECMAScript 7, contributing to open source projects like Travis CI, and building hardware which displays the state of our continuous integration build.

Be warned, we didn’t polish the code for this blog post, but you can find it on GitHub: https://github.com/classam/5rat

-Jonah Stiennon, Ecosystems, Sauce Labs

Want to work at Sauce Labs? Submit your resume! https://saucelabs.com/careers

Categories: Companies

Webinar Q&A: Continuous Delivery and Pipeline Traceability with Jenkins and Chef

Thank you to everyone who joined us for our webinar; the recording is now available.

Because of technical difficulties in recording this webinar, here is another recording covering the missing information.

And the slides are here.

Below are the questions we received during the webinar Q&A:


Q: Has Chef-client output come out on console or have specific things been captured in this traceability case?
A: I just wanted to show the generated report JSON at the very end of the Chef run. By default, the output is usually redirected to /var/log/chef/client.log, but if you run chef-client manually you’ll see the output on stdout, of course.

Q: On one slide, it was mentioned to use Environments to version recipes. This is a best practice question then, does that slide then suggest that environments should be of type "appName-Version" and applied to nodes? It points to the cookbooks that are versioned. So that would be different than using Environments of like "test", "dev", etc.
A: For clarity, the recommendation is to use Environments to pin specific cookbook versions to a particular subset of nodes. You can still use environment names like “test”, “dev”, etc. For example, I have “mycookbook” version 1.2.3. I want to roll out “mycookbook” version 1.3.0. Set your CI job to first update “dev” with ‘cookbook “mycookbook”, “= 1.3.0”’, while “test” has ‘cookbook “mycookbook”, “= 1.2.3”’. When you are ready to promote this change from dev to test, the CI job that promotes this to test should set the definition for the “test” environment to ‘cookbook “mycookbook”, “= 1.3.0”’.
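
For illustration, the pinned environment definition itself is only a few lines of the standard environment DSL (a minimal sketch reusing the names from the answer above):

# environments/test.rb
name "test"
description "Cookbook versions pinned for the test environment"
cookbook "mycookbook", "= 1.2.3"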

Q: What licensing restrictions are there on using open-source Chef in the enterprise? Is it just the enterprise features?
A: You can use OSS Chef for free, no restrictions. In fact, you can also demo the Enterprise paid features from OSS. You can use all the Enterprise features for under 25 nodes for free. For more than 25 nodes, you have to license the Enterprise features.

Q: Do we have a cookbook for the Weblogic server ?
A: You can see all the cookbooks shared in Supermarket at http://supermarket.chef.io but you can also find many others not on Supermarket via GitHub. There is a weblogic cookbook currently on the Chef Supermarket site.

Q: Would a simple app code only change result in a change to a cookbook (tweak version of app to deploy?) and also trigger app unit / acceptance tests during the Test Kitchen execution?
A: You don't *have* to structure change that way, but I'd recommend it. I find it's easier for auditing to discover what happened, where and when. You could skip those tests, but I wouldn't recommend it unless you're in some emergency breakfix scenario.

Q: Can you recommend a Jenkins plugin for pipeline?
A: For a build pipeline, I'd recommend Build Flow. You could start with something simpler like the Build Pipeline plugin, but that doesn't allow for concurrency. You might not need concurrency. But you might. :-) So I'd recommend starting there.

Q: How do you decide what version to automatically increment in the Jenkins Build Job?

A: Your cookbooks are tagged with versions in them: X.Y.Z (major.minor.patch). If you submit any new change without rolling X or Y, the pipeline just increments Z based on the last known good version. If you roll X or Y, Z resets to zero and the CI job manages versions from there. This makes Z in whatever source you submit effectively useless. The pipeline owns Z, you own X and Y.
Q: Do you have example cookbooks showcasing this setup available for review somewhere, like a GitHub repo?
A: Sure, you can grab the scripts, cookbooks and configs used in this webinar from my GitHub repository: https://github.com/woohgit/jenkins-chef-traceability-example

Q: Can you provide some tools/utilities for testing Chef code? If I am not using Vagrant, can I use Test Kitchen? It would be great if you could suggest some utilities which can be used to test Chef code without using Vagrant.

A: Is there a particular aversion to using vagrant? You could run tests by hand, I suppose. Recommended practice is to use vagrant via test-kitchen; it makes your life much easier. You could also try something like minitest and enable the chef-minitest report handler. If there’s a reason you can’t use vagrant, let’s talk about that and figure out a workable approach.
Q: Does Vagrant simply spin up a VM with Chef installed in it?
A: Test Kitchen effectively manages your Vagrant files and sets them up to automatically bootstrap Chef and grab your cookbook code out of your PWD. You can also use the kitchen config file to pass configs on to Vagrant directly if you want additional functions. Check out the Bento project for more details on what happens to your Vagrant box -- https://github.com/opscode/bento
Q: Can you put together a list of all technologies in this demo (Chef, Jenkins, Ruby, Vagrant, etc.)?
A: I think the best is to check out the source code from my GitHub repository. It has a README which contains the answers for your question. My GitHub repository can be found here: https://github.com/woohgit/jenkins-chef-traceability-example

Q: How does the Jenkins plugin obtain information about the status of the Chef deployment?
A: Chef generates a report at the end of the chef-client run, and the chef-handler-jenkins gem - which is included in the cookbooks - selects only the file-related changes and sends a POST request to a specific Jenkins URL. The Chef Tracking Plugin handles that POST data. To be more precise: the Chef Tracking Plugin exposes an API endpoint, http://<JENKINS_URL>/chef/report, which is used by Chef to send the reports.

Q: Just saw that Chef mentioned it had deployed a war as a result of the Jenkins job. Is it possible instead to deploy other types of artifacts? E.g. install an .exe in Windows, or apply RPM update packages in SUSE/RHEL?
A: Sure, you can deploy any type of files. There is absolutely no restriction.

Q: Is there a way to trigger deployments from within Jenkins?
A: Sure, you can. You can configure Jenkins to start a build when a commit has been pushed to the repository, or job A can trigger job B if the build was successful. There are a lot of plugins as well which make this easier. I’d suggest this one:
https://wiki.jenkins-ci.org/display/JENKINS/Parameterized+Trigger+Plugin


Q: IS there any weblogic cookbook?
A: Answered above.

Q: Does Chef use something like a snapshot to revert any manual changes made to nodes or does it only apply settings outlined in a recipe?
A: Snapshots are not used or recommended. Everything in Chef is explicit: Chef only manages settings outlined in your recipe. Chef is not a magic pony and it does not somehow automagically understand everything that has changed on your system.

Q: We speak about serverspec (which tests the server) but is chefspec also needed? To test the cookbook-recipe code?.. Or is Serverspec good enough of a test to move things into production?
A: There are a number of test frameworks to use and I’m a big fan of chefspec. The proposed workflow was an example of the types of functions we may want. I would recommend both testing your code with chefspec and testing your resulting infrastructure with serverspec at different points in your development lifecycle.

Q: What is the best method of using Chef master/master or master/slave setup?
A: High Availability in Chef is accomplished as active/passive. More information on running Chef in HA mode can be found here -- https://docs.chef.io/server_high_availability.html

Q: How can I perform hotfix configs using Chef?
A: I think the question is how to deploy a hotfix in this type of pipeline? In the proposed workflow, the most obvious way to deploy a hotfix is to merge it straight into master (bypassing the validation/code review cycles). Merge the code and let your pipeline promote it programmatically.

Q: Can we spawn VMs in the cloud instead of using Vagrant?
A: Since version 1.1, Vagrant is no longer tied to VirtualBox and also works with other providers such as AWS EC2. Check for specific Vagrant plugins here: http://vagrant-lists.github.io/, especially the vagrant-aws plugin. Vagrant is only a wrapper around VirtualBox, KVM, AWS EC2, DigitalOcean and so on.

Q: How do you see the future of Chef given the rise of Docker? Do you see that the adoption of Chef will increase or decrease as the adoption of Docker increases?
A: Containers also require configuration. It will be interesting to see where Docker goes in the future. While creating a container is simple with Docker, running complex infrastructure topologies in production is difficult. Complexity never entirely goes away; as engineers we may just move it to different parts of the stack. Configuration management still has a strong role even in a containerized world. I encourage you to look at the work we’ve done with Chef Container for more ways to use Chef with containers -- https://docs.chef.io/containers.html.
Categories: Companies

Automated Performance Analysis for Web API Tests

A modern web application is typically not restricted to being used via a web frontend. It also provides functionality that is used elsewhere, such as in mobile apps. Think about an e-commerce site today: a shop does not get its business exclusively from sales on its web shop, but also through mobile apps and through rich-client applications […]

The post Automated Performance Analysis for Web API Tests appeared first on Dynatrace APM Blog.

Categories: Companies

Nexus 2.11.1 – Why It’s Time to Upgrade

Sonatype Blog - Tue, 12/23/2014 - 11:00
TL;DR: The release of Nexus 2.11.1 includes a fix for the security vulnerability CVE-2014-9389. Whenever a new Nexus release becomes available there are a myriad of reasons to upgrade. The team always seems to manage to bring in some really useful new features or bug fixes that you have been...

To read more, visit our blog at blog.sonatype.com.
Categories: Companies

Low Level Considerations for VS of the Future (an old memo)

Rico Mariani's Performance Tidbits - Tue, 12/23/2014 - 10:44

I wrote this a long time ago.  It's very interesting to me how applicable this is today.  And how it is not very specific to Visual Studio at all...


Low Level Considerations for VS of the Future
Rico Mariani, Sept 12/2007

Introduction

I’ve been giving much thought to what enabling steps we have to take to make our VS12+ experience be something great. I will not be trying to talk about specific features at all; I’m not sure we are anywhere near the point where we could talk about such things. But I do consider the prototype we’ve seen as sort of representative of what the underlying needs would be for whatever experience we ultimately create.

So without further ado, here are some areas that will need consideration with information as to what I think we need to do, broadly, and why.

Memory Usage Reduction

Although we can expect tremendous gains in overall processor cycles available to us within the next decade, we aren’t expecting similar gains in memory availability – neither L2/L0 capacity, nor total bandwidth. Project sizes, file sizes, and code complexity generally are going up, but our ability to store these structures is not increasing.
To get past this and enable the needed parallelism, we must dramatically reduce our in-memory footprint for all key data structures. Everything must be “virtualized” so that just what we need is resident. In an IDE world with huge solutions in many windows, vast amounts of the content must be unrealized. Closing is unnecessary because things that are not displayed have very little cost.

Investment -> Ongoing memory cost reduction

Locality

In addition to having less overall memory usage, our system must also have very dense data structures. Our current system trends in the “forest of pointers” direction, with in-memory representations being not only larger than on-disk representations but also comparatively rich in pointers and light on values.

More value-oriented data-structures with clear access semantics (see below) will give us the experience we need. The trend to expand data into “pre-computed” forms will be less favorable than highly normalized forms because extra computation will be comparatively cheaper than expanding the data and consuming memory. Memory is the new disk.

Investment -> Drive density into key data structures, use value-rich types

Transactions

Key data structures, like documents, need the notion of transactions to support unit-of-work semantics. This is important for the many cases where rollback is critical. There need not be anything “magic” about this – it doesn’t have to be transacted memory, it’s merely transaction support in the API. To help resolve parallelism concerns and disconnected-data concerns, the notion that reads and writes might fail is important, and at that point atomicity of operations is crucial. Transacted documents may or may not imply locking or versioning, but don’t have to. See the next topic.

Investment -> Key data-structures gain a transacted API
Investment -> Key data-structures use transactions to communicate “scope of change” after edits.
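
As a rough illustration of transaction support in the API (not transacted memory), the shape could be as simple as the following sketch; the names are illustrative only:

using System;

public interface IDocumentTransaction : IDisposable
{
    void Commit();    // atomically publish this unit of work; may fail
    void Rollback();  // discard the edits; also the natural retry point
}

public interface IDocument
{
    IDocumentTransaction BeginTransaction();
}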

Isolation

Having an isolation model means you can understand what happens when someone changes the data structure out from under you. What changes will you see, and which won’t you? How much data can you read and expect to be totally self-consistent?

Investments -> Key data-structures must have an isolation model

A UX that is not strongly STA based

The IDE of the future must focus presentation in a single concentrated place but, in a game-like fashion, produce “display lists” of primitives, delivered to the rendering system in an “inert” fashion (i.e., as data, not as API calls), so that presentation is entirely divorced from computing the presentation and the rendering system is in fact always ready to render.

This is entirely harmonious with the fact that we want all these display lists to be virtualized. You need not “close” windows – merely moving them out of frame is enough to reclaim the resources associated with the display.  The 1000s of windows you might have cost you nothing when you are not looking at them.

Investment -> A standard method for pipelining ready-to-render displays
Investment -> An STA-free programming model with non-STA OS services (e.g. file open dialog)
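
To make the “inert display list” idea concrete, here is an illustrative sketch (names invented for this memo): primitives are plain data, so computing a frame is fully divorced from drawing it, and the renderer never calls back into the UI:

using System.Collections.Generic;

public abstract class DrawOp { }
public sealed class DrawText : DrawOp { public double X, Y; public string Text; }
public sealed class DrawRect : DrawOp { public double X, Y, Width, Height; }

// A data-only frame description; rendering walks Ops without calling back into the UI.
public sealed class DisplayList
{
    public List<DrawOp> Ops = new List<DrawOp>();
}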

Twin Asynchronous Models Pervasively

It is easiest to illustrate these two needs by example:

  1. Search for “foo” 
    • This can be done in a totally asynchronous fashion with different “threads” proceeding in parallel against different editor buffers or other searchable contexts.  All the “foos” light up as they are discovered.
    • Foos that are not visible need not be processed, so this sort of parallelism is “lazy”
  2. Rename “foo” to “bar”
    • We can still do this asynchronously but now there is an expectation that we will eventually do it all and of course we must do all the foos not just the visible ones
    • Generally this requires progress, and a transaction
    • It can fail, or be cancelled, these are normal things

Both of these represent two cases where we can use the “coarse” parallelism model.  Of course we also would like to do fine-grained parallelism in places and in fact searching could be amenable to this as well.  Fine-grained has the opportunity to keep locality good because you stream over the data once rather than access disparate data streams all at once but it is of course more complicated.  Enabling both of these requires the previous investments:

  • Limited memory usage in the face of many threads running
  • Ability to access the data in slices that are durable and limited in span
  • Clear boundaries for write operations that may happen while the work is going on
    • This also allows for notifications
    • This allows for a clear place to re-try in the event of deadlocks/failures
  • Isolation so that what you see when you change or update is consistent to the degree that was promised (it’s part of the contract)
  • No STA UI so that you can present the results in parallel without any cosmic entanglements outside of the normal locking the data-structures require – no  “stealth” reentrancy

Investment -> “Lazy” asynchronous display operations
Investment -> Asynchronous change operations
Investment -> Failure tolerant retry like in good client server applications


Conclusion

With the above investments in place it becomes possible to use multi-threading APIs to create high speed experiences that allow for real-time zooming, panning, searching, background building, rich intellisense, and other algorithms that can exploit the coarse parallelism abundant in large solutions. This can be done with a variety of threading models – I like data pipelined models myself, but they aren’t necessary. Access to key data-structures increasingly looks, to the consumers of that data, like access to a database.

Even with little to no change in actual processor dispatch mechanisms we get these benefits:

  • The underlying data storage is enabled to take advantage of CPU resources available
  • Other factors which would make parallelism moot are mitigated

Under these circumstances a “game quality” UX is possible – that’s the vision for VS12.  To get there, we need to start thinking about these considerations now so that our designs begin to reflect our future needs.

Categories: Blogs

Selenium Hangout 6 Recap

Selenium - Tue, 12/23/2014 - 03:28

01:35 – 9:45 W3C Update
Notes from most recent W3C Meeting
Highlights:
– changes to the get_attribute method call
– screenshots (changing to viewport only, eventually will support whole page)
The WebDriver W3C working group has a GitHub repo now
– WebDriver will move from a “REST-ish” to a more “RESTful” interface

11:23 – 16:00 Selenium 3 Status Update

16:05 – 17:10 Marionette (FirefoxDriver rewrite) testing help
Marionette Roadmap

17:20 – 19:27 ChemistryKit rewrite
Announcement blog post

17:28 – 20:24 Visual Testing Part 1
Getting Started with Visual Testing
Applitools (visual testing cloud solution built on top of WebDriver)

20:25 – 23:47 Selenium Guidebook in Java!
The Selenium Guidebook

23:52 – 29:51 Visual Testing Part 2
Web Consistency Testing
Why MogoTest won’t be open sourcing its code after shutting down
Michael Tamm’s GTAC talk on Fighting Layout Bugs
Getting Started with Visual Testing


Categories: Open Source

When Programmers (and Testers) Do Their Jobs

DevelopSense Blog - Tue, 12/23/2014 - 00:23
For a long time, I’ve admired Robert (“Uncle Bob”) Martin’s persistent advocacy of craftsmanship in programming and software development. Recently on Twitter, he said: “.@LlewellynFalco When programmers do their jobs, testers find nothing.” — Uncle Bob Martin (@unclebobmartin), December 8, 2014. One of the most important tasks in the testing role is to identify […]
Categories: Blogs

Webinar Q&A: Analyze This! Jenkins Cluster Operations and Analytics

Thank you to everyone who joined us for our webinar; the recording is now available.

And the slides are here.

Below are the questions we received during the webinar Q&A:


Q: Is the access control able to serve as a middle point between users and a backing AD/LDAP setup? Defining custom groups that just matter to Jenkins, for instance. Or does it just centralize the config?
A: Yes, CloudBees Role Based Access Control allows you to use a group provided by AD/LDAP or to define your groups in Jenkins.

Q: For these ES analytics, what DB strategy do you use actually? I mean NOSQL or conventionally RDBMS?
A: We use Elasticsearch, which is a document-oriented database and search engine.

Q: How well does the Operation Center servers scale, can it run on multiple instances with a load balancer?
A: Jenkins Operations Center can be clustered behind a load balancer. The load on JOC is limited because it is mostly an orchestrator. JOC can orchestrate dozens of masters and hundreds of slaves.

Q: How do you sync jobs, configs, etc. among Jenkins masters?
A: Jobs and configurations are not synced between masters per se. If you are referring to the HA feature in Jenkins Enterprise, this is done via a shared filesystem between the hot and cold master.

Q: Can the update center help to deploy any resources to the instance's file system that are not part of the Jenkins configuration or plugins? Or is the update limited to the bounds of Jenkins?
A: Custom Update Centers not only serve plugin and Jenkins-core files but also serve tool installers. Popular tool installers include Git, JDK, JVM, and Maven. In that sense, Update Centers also deal with deployment of resources to slaves.

Q: Do the analytics support a sort of charge-back or throttling model to prevent greedy jobs from hogging too much of the resource pool?
A: Analytics is only a reporting engine. It does not affect the slave scheduling behavior.

Q: Are the metrics you generate limited by the amount of history you retain in your Jenkins instance?
A: Builds are reported in real-time, but you can re-index historical builds using a cluster operation. Builds are retained for 3 years by default in the analytics database, even if they are deleted on the remote Jenkins instance.

Q: Is there an API that will allow us to serve up the Jenkins performance charts on an internal website to our clients?
A: We provide the Elasticsearch API, which you can access using a Jenkins API key.

Q: Are there alerts in form of notifications on analytics sent to admins?
A: You can configure email alerts to be sent when internal metrics reach a threshold.

Q: We periodically see heap or permgen issues in our builds, but the JVM is the one called up by the Maven process to compile the code, not the master instance itself. Would the analytics view allow us to see the JVM memory for the JVM running the compiles?
A: No, Analytics does not include the JVM memory at this time.

Q: If you only have 2 VMs/servers, would it be best just to have 2 masters, or would it be best to create slaves on the existing hardware as the masters to segregate?
A: It's usually best to run builds on slaves before you begin adding more masters.

Q: Can you export the analytics/metrics to an external graphite/grafana server?
A: The performance metrics can be reported to Graphite using the DropWizard metrics Graphite plugin.

Q: Would this be able to interact with something like the Jenkins Mesos plugin similar to the system eBay has set up? I'd like to use Docker containers for my slaves.
A: http://www.slideshare.net/cloudbees/analyze-this-jenkins-cluster-operations-and-analytics
Categories: Companies

Testbirds and Perfecto Mobile Partner on Mobile Testing

Software Testing Magazine - Mon, 12/22/2014 - 18:05
Testbirds, a crowdtesting company in Europe, and Perfecto Mobile, a mobile application quality vendor, have announced a partnership that combines their extensive testing expertise to further increase mobile application quality. Both companies specialize in mobile testing solutions for enterprises to test and optimize their applications. Perfecto Mobile offers the Continuous Quality Lab, their on-demand, cloud-based offering that enables the testing (automated and manual functional testing, monitoring and performance) of mobile apps under any real-world end-user condition throughout all stages of the software development lifecycle. Testbirds provides a crowd of professional testers ...
Categories: Communities

Testers: Challenge Yourself Just Barely Above Your Current Skill Level

uTest - Mon, 12/22/2014 - 16:30

This article is about a new testing acronym: CYJBYCSL. Read along to see what it means.

I have been reading lately from Stephen King’s book “On Writing,” where he talks, among other things, about what helped him become a better writer and eventually find success. Here are the main three:

  • Have lots of practice
  • Be driven by honest feedback: Over the years, his work was rejected hundreds of times by editors and magazines; every time he got a rejection note, the note was placed on a nail in the wall; after a while the nail could not support the weight of the rejection notes, so it was replaced by a spike
  • Challenge yourself just barely above your current skill level

The simplicity of the three points of Stephen King’s training made complete sense. But there was something bothering me about the third point: challenging himself just barely above his current skill level.

I realized, though, that this is exactly what I have been doing to become better in my profession:

Challenge
Yourself
Just
Beyond
Your
Current
Skill
Level

I finally have a name for my not-very-well-understood-at-that-time learning process!

How can a tester use CYJBYCSL to improve his or her career? Here is a possible learning path using CYJBYCSL that goes from manual testing, to learning technical skills, to test automation:

Start with manual testing

Work on scripted manual testing

Learn exploratory testing

Look into test-design techniques

Mind mapping

Read about lateral thinking

Start learning technical skills

Understand how browsers work

Read about how HTTP protocol works

Learn about GET and POST web requests

Learn about URL parameters, parameter coding and decoding

Learn about browser sessions and cookies

Use a web proxy

Look at  the HTML page source

Learn HTML

Use browser plugins (Firebug, FirePath, Firecookie)

Understand how the browser DOM works

Use XPath for locating elements in the browser DOM

Read about web elements formatting using CSS

Learn the basics of JavaScript

Record and play with Selenium IDE

Learn the Java basics

Practice regular expressions

Install Eclipse

Learn JUnit

Learn WebDriver

Practice test automation

Learning like this is similar to climbing a staircase, step by step. The order of these skills is not very important. What is important is that each new skill is just above what the tester knows at that time. Being just above the current tester level makes the learning of a new skill a realistic challenge.

What do you think? Can this process work for you as well?

Alex Siminiuc is a uTest Community member and Gold-rated tester and Test Team Lead on paid projects at uTest. He has also been testing software applications since 2005…and enjoys it a lot. He lives in Vancouver, BC, and blogs occasionally at test-able.blogspot.ca.

Categories: Companies

Ranorex Ranked 146 on the Deloitte Technology Fast 500 EMEA List of Fastest Growing Companies

Ranorex - Mon, 12/22/2014 - 16:15
With a combined growth rate of 1129% over 5 years, we secured 146th place on the list of the 500 fastest growing companies in 2014.

We would like to thank all of our 1700 clients for caring so much about test automation. It's great to see how Ranorex products help them to excel with their automated testing.

About Deloitte Technology Fast 500 EMEA
The Deloitte Technology Fast 500 EMEA program is the region's most objective industry ranking focusing on the technology field, recognizing technology companies that have achieved the fastest rates of revenue growth in Europe, the Middle East, and Africa (EMEA) over the past 5 years. Combining technological innovation, entrepreneurship and rapid growth, Fast 500 companies - large and small, public and private - span a variety of industry sectors, and are leaders in hardware, software, telecom, semiconductors, internet, media and life sciences along with emerging areas, such as clean technology.
Categories: Companies

Not on Twitter

Thoughts from The Test Eye - Mon, 12/22/2014 - 12:49
Ideas

I don’t have a Twitter account.

I read Twitter now and then, and it contains useful information, but I don’t have the time to do it properly. For me, doing it properly would mean often writing thoughtworthy things within 140 characters.

I only have one of those, so I’d better publish it in a blog post:

What about doing manual regression testing to free up time to make valuable automation?

So, that was one Christmas tweet, and it did the opposite of decreasing my blogging frequency (which is the general drawback of tweeting, in my opinion).

Categories: Blogs

Stop hugging, start working … on excellence!

Some context: this blogpost is my topic for a new peer conference called “Board of Agile Testers (BAT)” on Saturday, December 19, 2014, at Hotel Bergse Bossen in Driebergen.

I love agile and I love hugging… For me an agile way of working is a, not THE, solution to many irritating problems I suffered from in the 90’s and 00’s. Of course people are the determining factor in software development. It is all about people developing (as in research and development) software for people. So people are mighty important! We need to empower people to do awesome work. People work better if they have fun and feel empowered.

Vineet Nayar says that people who want to excel need two important things: a challenge and passion. These factors resemble the ones described by Daniel Pink: autonomy makes room to excel, passion feeds mastery, and a challenge gives purpose. I wrote an article about this subject for Agile Record called “Software development is all about people“. I see agile teams focus on this people stuff: collaboration, working together, social skills… But why do they often forget Mastery in testing?

Rapid Software Testing teaches serious testing skills by empowering testers in a martial-arts approach to testing. Not by being nice and hugging others, but by teaching testers serious skills: to talk about their work, say what they mean, and stand up for excellence. RST teaches that excellent testing starts with the skill set and the mindset of the individual tester. Other things might help, but excellence in testing must centre on the tester.

One of the many examples is in the new “More Agile Testing” book by Lisa & Janet: in chapter 12, “Exploratory Testing”, there is a story by Lisa: “Lisa’s story – Spread the testing love: group hugs!” My review comment was, and I quote: “I like the activity but do not like the name… I fear some people will not take it too serious… It might get considered too informal or childish. Consider a name like bug hunts.”

Really? Hugs? The whole hugging ethos in agile makes me CRAZY. Again, I love hugging, and my Twitter profile says I am a people lover. But a fluffy approach to agile in general, and testing in particular, makes me want to SCREAM! It makes me mad! Stop diminishing skills. If people are doing good work, sure, hug them; but if they aren’t, give them some serious feedback! Work with them to get better and grow. Mentor them, coach them, teach them. But what if they do not improve? Or do not want to improve? Well… maybe then it is time to say goodbye. It is time to start working on some serious skills!

Testing is serious business, already suffering from misunderstanding and underestimation by many who think they can automate all testing and that everybody can test. In agile we are all developers, and T-shaped people will rule the world. In 15 years there will be only developers doing everything: writing documentation, coding and testing… Yeah, right! I wish I could believe that. Testing is HARD and needs a lot of study. As long as I see a vast majority of people not willing to study testing, I know I will have a job as a testing expert for the rest of my life!

This blogpost reflects some “rough ideas”. After the peer conference I will update this post with the ideas discussed in the room.

Categories: Blogs

Oh, Kay!

Hiccupps - James Thomas - Sat, 12/20/2014 - 08:28


Phil Kay is a stand-up comedian known for his love of live work and improvisation. In his interview for The Comedian's Comedian recently he said some things that resonated with me.

When he's talking about the impression others may have that there are rules of improvisation, I'm thinking about testing:

"There's not a principle that I must avoid things I've done before ... There's plenty of room in the form for doing brand new things [but that's] not the aim, that I must do it brand new."

When he's talking about how he constantly watches for and collects data that he hopes will come in useful later, that will help him to make connections and that will keep his mojo working when he's not on stage, I'm thinking about testing:

"I write notes all the time ... anything interesting that comes to me ... but [the notes] are not the thing. The thing is the fact that I'm watching out for stuff ... like a boxer keeping loose ... on stage I hope they'll all come together."

When he's talking about how not being tied to a prescribed structure opens up possibilities, I'm thinking about testing:

"Allow the best to be a thing that could happen. If you're trying to enforce something, no best can ever happen."

And when he's talking about how it doesn't work sometimes, I'm still thinking about testing:

"The list of traumatic failure gigs is so long ... I accept the risk it'll go wrong."

Looking around for related material I found that James Lyndsay has a workshop on Improvising for Testers, and Damian Synadinos has one specifically on the links between improv comedy and testing, Improv(e) Your Testing! Tips and Tricks from Jester to Tester. George Dinwiddie has also written about TDD and Improv.
Image: https://flic.kr/p/hquBik
Categories: Blogs

AutoMapper 3.3 feature: open generics

Jimmy Bogard - Sat, 12/20/2014 - 01:10

One of the interesting features of AutoMapper 3.3 is the ability to map open generic types. Open generics are those that don’t supply type parameters, like:

var openType = typeof(IEnumerable<>);

AutoMapper had some limited support for certain built-in open generics, but only the collection types. This changed in version 3.3, where you can now map any sort of open generic type:

public class Source<T> {
    public T Value { get; set; }
}

public class Destination<T> {
    public T Value { get; set; }
}

// Create the mapping
Mapper.CreateMap(typeof(Source<>), typeof(Destination<>));

Instead of using the normal syntax of the generic CreateMap method, you need to use the overload that takes Type objects, because C# only accepts closed generic types as type parameters. You can still use all the available configuration to do member-specific mappings, but you can only reference members by string instead of by expression. Not a limitation per se, but just something to be aware of.
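
For example, member configuration on an open map must use the string-based overloads. A sketch, assuming the non-generic ForMember/MapFrom overloads mirror the generic API:

Mapper.CreateMap(typeof(Source<>), typeof(Destination<>))
    .ForMember("Value", opt => opt.MapFrom("Value"));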

To use the open generic mapping configuration, you can execute the mapping against a closed type:

var source = new Source<int> { Value = 10 };

var dest = Mapper.Map<Source<int>, Destination<int>>(source);

dest.Value.ShouldEqual(10);

Previously, I’d have to create maps for every closed type. With the 3.3 version, I can create a map for the open type and AutoMapper can automatically figure out how to build a plan for the closed types from the open type configuration, including any customizations you’ve created.

Something that’s been asked for a while, but only recently have I figured out a clean way of implementing it. Interestingly enough, this feature is going to pave the way for programmatic, extensible conventions I’m targeting for 4.0.

Someday.

Categories: Blogs
