
Feed aggregator

100 Day Deep Work - Day 1: C# Namespaces

Yet another bloody blog - Mark Crowther - Wed, 03/01/2017 - 01:02


OK, so it's officially Day 1 of the #100DayDeepWork challenge inspired by Cal's Deep Work book, and I've actually raced ahead of myself and done some of the proposed study out of order: namely, the 1hr course by Mosh, which was a good warm up. I should have thought of doing that first instead of something that takes 8 hours ;]

Better still, I was able to skip forward during his presentation as the material is very basic for me at this point. Be sure to check the description section as he kindly provides the timings for each topic he discusses. You can also check out his website and courses if you like his style (you will!).
Namespaces
One thing that came up from his video however was Namespaces. I've certainly used namespaces in my code before, heck Visual Studio automagically creates a namespace for you and, as part of C#, there's no getting around using them. However, it's always interesting to note that you use these things without questioning them, so I decided to do some Deep Work on Namespaces. There were a few good take-aways that are now a bit more ingrained in my brain:

  • Namespaces aren't arbitrary, they are describing the structure of the solution you're coding
  • They're hierarchical and so thought needs to be given to how you declare them
  • That hierarchy controls scope and so can be a powerful tool for your code design
Going ahead I'll be paying more attention to the use of Namespaces and how they appear in my code.
Nested Namespaces
As we create our namespace hierarchy we can declare namespaces in a long form that we write out in full, as below. For example:
namespace ExampleNamespace
{
    class ExampleClass
    {
        public void ExampleMethod()
        {
            // some stuff here
        }
    }

    // Then add a nested namespace
    namespace TheNestedNamespace
    {
        class NestedClass
        {
            public void NestedMethod()
            {
                // some more stuff here
            }
        }
    }
}
In terms of declaring them, I'm not going to use nested namespaces as that feels like a great way to over-complicate the layout of code. Instead I'll follow the common shorthand approach.

Shorthand
The shorthand, and I think tidier, way of declaring them is the one we usually see:

using ExampleNamespace.TheNestedNamespace;

This seems much neater to me.
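Strictly speaking, the using line above is for consuming the namespace; the declaration itself can also be written in a shorthand, dotted form rather than physically nesting the blocks. A minimal sketch (the class and method names are just placeholders, not from the course):

// Equivalent to declaring TheNestedNamespace inside ExampleNamespace,
// but without the extra level of braces.
namespace ExampleNamespace.TheNestedNamespace
{
    class NestedClass
    {
        public void NestedMethod()
        {
            // some stuff here
        }
    }
}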

Alias
Another consideration is that the nested namespace usings may become very long as we build our hierarchy out. A way around this is to use an alias.

using MyAlias = ExampleNamespace.TheNestedNamespace;

Now we can just type a line such as:

MyAlias.SomeClass.SomeMethod();
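A minimal sketch of the alias in use, assuming the NestedClass declared in the earlier example (the Program wrapper is purely illustrative):

using MyAlias = ExampleNamespace.TheNestedNamespace;

class Program
{
    static void Main()
    {
        // The alias stands in for the full namespace path.
        var nested = new MyAlias.NestedClass();
        nested.NestedMethod();
    }
}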
That's it for Day 1 of Deep Work. I had a question nagging me about iterating over multiple elements with Selenium so tomorrow I might work on that just so it's not distracting.

Mark

P.S. This is the book that gave me the idea for the 100 Day Deep Work challenge, check it out!
--------------------------------------------------------------------------------------------------------------
Day 1: http://cyreath.blogspot.co.uk/2017/02/100-day-deep-work-day-1-c-namespaces.html
Day 0: http://cyreath.blogspot.co.uk/2017/02/100-day-deep-work-day-0-learning-plan.html


Categories: Blogs

Now on DevOps Radio: Poppin’ Fresh DevOps, Featuring General Mills DevOps Engineer, Sam Oyen

In the latest episode of DevOps Radio, Sam Oyen, DevOps engineer at General Mills, sits down with host Andre Pino to discuss how the company behind well-known icons such as the Pillsbury Doughboy and Betty Crocker is using DevOps. Sam talks about how she fell into DevOps, why she loves it, what she enjoys most about Jenkins World and concludes with some advice for women in the IT industry.

At General Mills, Sam is part of the team that manages all of .Net, Android and iOS applications for the entire organization. Sam’s team works on websites and related applications for brands like Pillsbury and Betty Crocker. The team, more than 130 developers worldwide, supports thousands of apps - from external apps to internal business apps and internal websites used for tracking data. Sam explains that each application requires developers to tailor the platform to meet specific needs. Using the Templates feature from CloudBees, the team is able to use one template for about 95% of their Jenkins jobs.

Sam was drawn to the DevOps field because of her love of problem solving and collaboration. It is these two concepts that she felt were exemplified through the “Ask the Experts” booth and white board stations at Jenkins World.

While Sam doesn’t feel there’s a big difference for men and women in DevOps, she does say it’s important for women to have allies. The biggest thing for both genders to embrace is that it’s okay – even good – to fail early and often. At first that seems counterintuitive, but the ability to fail fast is one of the value drivers for business as a result of continuous delivery processes and a DevOps culture.

Looking to upgrade your morning routine with something besides biscuits or toaster strudel? Check out the latest episode of DevOps Radio on the CloudBees website or on iTunes. Make sure you never miss an episode by subscribing to DevOps Radio via RSS feed. You can also join the conversation on Twitter by tweeting out to @CloudBees and including #DevOpsRadio in your post.

Blog Categories: Company News, Jenkins
Categories: Companies

Dynatrace Innovation: In 2016 we drew a line in the sand.

As I reflect on 2016 and move well into 2017, I couldn’t be more excited about our position and the year ahead. I believe that, with our focus on innovation, we’ve turned application performance management on its head, completely re-defining how the world’s best businesses will approach it from now on.

AI-powered, fully-automated, full-stack – that’s the future of APM. There’s no turning back now.

Moving beyond our technical achievements however, 2016 proved to be a year worth celebrating for many other reasons.

A big thanks to all our customers new and long-term who have put their faith in us. You’re the reason we strive to innovate and constantly push the APM envelope. And look at this retention rate. Not bad, eh?

Plus, we’re growing at a rapid rate:

In 2016 our new Dynatrace platform took off into the skies and we don’t expect it to come back down to earth. Cloud complexity and agile application environments will continue to drive exponential growth of our new APM platform, and we couldn’t be more excited.

And not only is our new Dynatrace platform taking off, but we continue to monitor and optimize more digital experiences than anyone else. Billions of visits (from millions of users connected to tens of thousands of applications) are better off because businesses the world over have Dynatrace at the monitoring helm.

But we’re not just a group of APM pioneers. We’re also a close-knit family of digital performance fanatics who happen to have celebrated new beginnings many times over this past year.

Now that’s a pretty great achievement.

And so is this collective milestone:

Suffice to say, it’s been a seriously great year here at Dynatrace.

A big thanks to all our customers, partners, employees, suppliers, peers, media friends, challengers and competitors. You keep us motivated and passionate every day of the year. Onward and upward in 2017!

The post Dynatrace Innovation: In 2016 we drew a line in the sand. appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

TestBash, Brighton, March 23-24 2017

Software Testing Magazine - Tue, 02/28/2017 - 09:00
TestBash is a two-day conference focused on software testing organized by the Ministry of Testing and taking place in Brighton. The first day is a workshop day and the second day is a single track...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Categories: Communities

How DevOps Killed the Market for Software Composition Analysis

Sonatype Blog - Tue, 02/28/2017 - 08:00
The niche market for Software Composition Analysis (SCA) tools has died.  The culprit: DevOps. In today's world, developers are king.  Innovation is the throne upon which they sit.  Anything seen as an inhibitor to DevOps agility is the enemy, and therefore, must be terminated....

To read more, visit our blog at www.sonatype.org/nexus.
Categories: Companies

100 Day Deep Work - Day 0 - The Learning Plan

Yet another bloody blog - Mark Crowther - Tue, 02/28/2017 - 01:44


Yes, Day 0 of #100DayDeepWork was all about planning and it took place on a 2.5hr journey back to London. In truth I slept some of that time, but hey I still got the 90 minutes in!

You can pick up the book that inspired this or just carry on reading through the posts.

Following on from the post yesterday and looking back over James's infographic, I thought it would be wise to plan a little first. If mastery is the primary objective then that, I feel, needs depth of understanding. As such I've planned to work through a set of online tutorials and, likely before they finish, to start cutting code by following a collection of YouTube tutorials I've collated.

The courses I'm going to work through are:


Once they're done, well actually as they're worked on and I feel the urge to cut code, I'll jump onto application and algorithm development. The idea is that by the time I've done the above I'll have a) encountered material I'm already comfortable with and b) seen more than one way to do the same thing.

Just be aware these phases of C# and Selenium will overlap. There's no need to just do C# or just do WebDriver, after all. Indeed, the course by Nikolay starts with a C# primer. Right now I'm in an automation role looking purely at the front end. Pretty basic then, but a good warm up before going deep diving.

After the above I will start layering in some Deep Learning of C# via sample app development. I created a playlist here: Learn C# via application development. These 8 videos will take about 5 hours or so to complete.

30+ Days so far
With stopping and starting videos, practice, etc. I'd estimate this is around 50 hours of Deep Learning minimum. That's about 33 days, but with skipping over the things that I'm happy with I'd say we have 30 days learning here.

OK, time to rest up and get onto Day 1 of #100DayDeepWork

Mark.


Categories: Blogs

Refactoring Towards Resilience: Process Manager Solution

Jimmy Bogard - Mon, 02/27/2017 - 23:30

Other posts in this series:

In the last post, we examined all of our coordination options as well as our process coupling to design a solution for our order processor. In my experience, it's this part of designing async workflows that's by far the hardest. The code behind building these workflows is quite simple, but getting to that code takes asking tough questions and getting real answers from the business about how they want to handle the coupling and coordination options. There's no right or wrong in the answers, just tradeoffs.

To implement our process manager that will handle coordination/choreography with the external services, I'm going with NServiceBus. My biggest reason to do so is that NServiceBus, instead of being a heavyweight broker, acts as an implementor of most of the patterns listed in the Enterprise Integration Patterns catalog, and for nearly all my business cases I don't want to implement those patterns myself. As a refresher, our final design picture looks like:

We've already gone over the API/Task generation side; the final part is to build the process manager, the Stripe payment gateway, and the event handlers (including SendGrid).

In terms of project structure, I still include Stripe as part of my overall order "service" boundary, so I have no qualms including it in the same solution as my process manager. With that in mind, let's look first at our order process manager, implemented as an NServiceBus saga.

Initial Order Submit

From the last post, we saw that the button click on the UI would create an order, but defer to backend processing for actual payments. Our process manager responds to the front-end command to start the order processing:

public async Task Handle(ProcessOrderCommand message,  
    IMessageHandlerContext context) {
    var order = await _db.Orders.FindAsync(message.OrderId);

    await context.Send(new ProcessPaymentCommand
    {
        OrderId = order.Id,
        Amount = order.Total
    });
}

When we receive the command to process the order, we send a command to our Stripe processor from our Saga, defined as:

public class OrderAcceptanceSaga : Saga<OrderAcceptanceData>,  
    IAmStartedByMessages<ProcessOrderCommand>,
    IHandleMessages<ProcessPaymentResult>
{
    private readonly OrdersContext _db;

    public OrderAcceptanceSaga(OrdersContext db)
    {
        _db = db;
    }
    protected override void ConfigureHowToFindSaga(
        SagaPropertyMapper<OrderAcceptanceData> mapper)
    {
        mapper.ConfigureMapping<ProcessOrderCommand>(m => m.OrderId)
            .ToSaga(s => s.OrderId);
    }

    // The Handle methods for ProcessOrderCommand (above) and
    // ProcessPaymentResult (below) complete this saga class.
}

It doesn't seem like much in our process; we just turn around and send a command to Stripe. But it means that our front end has successfully recorded the order. With our initial command sent, let's check out our Stripe side.

Stripe processing

On the Stripe side, we said that payments are an Order service concern, which means I'm happy letting payments be a command. Between services, I prefer events, and internal to a service, commands are fine (events are fine too, I just prefer to coordinate/orchestrate inside a service).

We can implement a fairly straightforward Stripe handler, using a Stripe API NuGet package to help with the communication side:

public async Task Handle(ProcessPaymentCommand message,  
    IMessageHandlerContext context)
{
    var order = await _db.Orders.FindAsync(message.OrderId);

    var myCharge = new StripeChargeCreateOptions
    {
        Amount = Convert.ToInt32(order.Total * 100),
        Currency = "usd",
        Description = message.OrderId.ToString(),
        SourceCard = new SourceCard
        {
            /* get securely from order */
            Number = "4242424242424242",
            ExpirationYear = "2022",
            ExpirationMonth = "10",
        },
    };

    var requestOptions = new StripeRequestOptions
    {
        IdempotencyKey = message.OrderId.ToString()
    };

    var chargeService = new StripeChargeService();

    try
    {
        await chargeService.CreateAsync(myCharge, requestOptions);

        await context.Reply(new ProcessPaymentResult {Success = true});
    }
    catch (StripeException)
    {
        await context.Reply(new ProcessPaymentResult {Success = false});
    }
}

Most of this is fairly standard Stripe pieces, but the most important part is that when we call the Stripe API, we track success/failure and return a result appropriately. Additionally, we pass in the idempotency key based on the order ID so that if something goes completely wonky here and our message retries, we don't charge the customer twice.

We could get quite a bit more complicated here, looking at retries and the like but this is good enough for now and at least fulfills our goal of not accidentally charging the customer twice, or charging them and losing that information.

Handling the Stripe response

Back in our Saga, we need to handle the response from Stripe and perform any downstream actions. Now since we have this issue of the order successfully getting received but payment failing, we need to track that. I've handled this just by including a simple flag on the order and publishing a separate message:

public async Task Handle(ProcessPaymentResult message,  
    IMessageHandlerContext context)
{
    var order = await _db.Orders.FindAsync(Data.OrderId);

    if (message.Success)
    {
        order.PaymentSucceeded = true;

        await context.Publish(new OrderAcceptedEvent
        {
            OrderId = Data.OrderId,
            CustomerName = order.CustomerName,
            CustomerEmail = order.CustomerEmail
        });
    }
    else
    {
        order.PaymentSucceeded = false;

        await context.Publish(new OrderPaymentFailedEvent
        {
            OrderId = Data.OrderId
        });
    }

    await _db.SaveChangesAsync();

    MarkAsComplete();
}

Depending on the success or failure of the payment, I mark the order as payment succeeded and publish out a requisite event. Not that complicated, but this decoupling of the process from Stripe itself means that when I notify downstream systems, I'm only doing so after successfully processing the Stripe call (but not that the Stripe call itself was successful).
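The post never shows the message contracts themselves. A minimal sketch of one plausible shape for them, with property names taken from the handlers above and the NServiceBus marker interfaces and saga data base class assumed (the Guid type for OrderId is a guess), might be:

using System;
using NServiceBus;

public class ProcessOrderCommand : ICommand
{
    public Guid OrderId { get; set; }
}

public class ProcessPaymentCommand : ICommand
{
    public Guid OrderId { get; set; }
    public decimal Amount { get; set; }
}

public class ProcessPaymentResult : IMessage
{
    public bool Success { get; set; }
}

public class OrderAcceptedEvent : IEvent
{
    public Guid OrderId { get; set; }
    public string CustomerName { get; set; }
    public string CustomerEmail { get; set; }
}

public class OrderPaymentFailedEvent : IEvent
{
    public Guid OrderId { get; set; }
}

public class OrderAcceptanceData : ContainSagaData
{
    public Guid OrderId { get; set; }
}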

SendGrid event subscriber

Finally, our publishing of the OrderAcceptedEvent means we can build a subscriber to then send out the email to the customer that their order was successfully processed. Again, I'll use a NuGet package for the SendGrid API to do so:

public class OrderAcceptedHandler  
    : IHandleMessages<OrderAcceptedEvent>
{
    public Task Handle(OrderAcceptedEvent message, 
        IMessageHandlerContext context)
    {
        var apiKey = Environment.GetEnvironmentVariable("MY_RAD_SENDGRID_KEY");
        var client = new SendGridClient(apiKey);
        var msg = new SendGridMessage();

        msg.SetFrom(new EmailAddress("no-reply@my-awesome-store.com", "No Reply"));
        msg.AddTo(new EmailAddress(message.CustomerEmail, message.CustomerName));
        msg.SetTemplateId("0123abcd-fedc-abcd-9876-0123456789ab");
        msg.AddSubstitution("-name-", message.CustomerName);
        msg.AddSubstitution("-order-id-", message.OrderId.ToString());

        return client.SendEmailAsync(msg);
    }
}

Again, not too much excitement here, I'm just sending an email. The interesting part is the email sending is now temporally decoupled from my ordering process. In fact, email notifications are just another subscriber so we can easily imagine this sort of communication living not in the ordering service but perhaps a CRM service instead.

Wrapping it up

Our process we designed so far is pretty simple, just decoupling a few external processes from a button click. With an NServiceBus Saga in place to act as a process manager, our possibilities for more complex logic around the order acceptance process grow. We can retry payments, do more complicated order acceptance checks like fraud detection or address verification.

Regardless, we've addressed our initial problems in the distributed disaster we created earlier. It took quite a few more lines of code and more moving pieces, but that's always been my experience. Resilience is a feature, and one that has to be carefully considered and designed.

Categories: Blogs

Cluster-wide Copy Artifacts

CloudBees Jenkins Enterprise lets you operate many Client Masters (multiple Jenkins masters) from a central place: CloudBees Jenkins Operations Center.

This is, for example, very useful to be able to spread the load across teams, and leave teams to decide more freely which plugins they want to install, how they want to configure their jobs on their master, and so on.

Use case

When you start using multiple masters, and you are writing a deployment pipeline for example, you may need to reference artifacts coming from a build on another master.

This is now possible with the 2.7 release of CloudBees Jenkins Enterprise. A specific new Pipeline step is provided, and it is also supported on FreeStyle, Maven and Matrix job types.

How do I use it?

It is very straightforward. For a full explanation, please refer to the official documentation. You can use fine-grained options to select the build you need in the upstream job (e.g. the last build, stable or not, some build by its number, etc.).

From a Pipeline script

For example, let's say I would like to get the www.war file generated by the last completed build (i.e. even if it failed, excluding currently running builds) from the build-www-app-job job, located in the team-www-folder folder. And I want this to time out after a maximum of 1 hour, 2 minutes and 20 seconds. Here is how I could do it:

node('linux && x86') {
  copyRemoteArtifacts from: 'jenkins://41bd83b2f8fe36fea7d8b1a88f9a70f3/team-www-folder/build-www-app-job',
    includes: '**/target/www.war',
    selector: [$class: 'LastCompletedRemoteBuildSelector'],
    timeout: '1h 2m 20s'
}

In general, for such a complex case, it is strongly recommended to use the Pipeline Snippet Generator to generate the right code. See an illustration about that below:

From a FreeStyle Job

Just look for the new Copy archived artifacts from remote/local jobs step, then you will find a very similar UI to the one above in the Pipeline Snippet Generator:

And there’s more!

This is just a quick overview. To get the full picture, please refer to the official “Cluster-wide copy artifacts” documentation.

Blog Categories: Developer Zone
Categories: Companies

Inside the Black Box Part 2 – Python

This is the second topic of the “Black Box” series, bringing us to Python. For people interested in part one — Generated Code — please click here. In this post I’ll go through how to take a simple Python script and instrument it using the AppMon Native ADK. I will also shed some light on the most common pitfall when instrumenting Python scripts and how you might be able to work around it.

Let's start with the example script. The code below is the complete script and, while it isn't doing much, it shows how to start the agent, how to create new PurePaths, how to capture function timings and arguments, and more.

The handleRequest function is the entry point of the script and it prints out whatever argument you pass to it. It could be visualized as a web request handler, which is executed once with every web request. The handleRequest function calls the executeQuery function three times with different arguments, simulating a function executing database queries.
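The script itself appears only as a screenshot in the original post; a minimal reconstruction based on the description above (the printed text and query strings are illustrative, not from the original) could look like this:

def executeQuery(query):
    # Simulates a function executing a database query.
    print("Executing query: " + query)


def handleRequest(argument):
    # Entry point: prints whatever argument is passed to it, then calls
    # executeQuery three times with different arguments.
    print("Handling request: " + argument)
    executeQuery("SELECT * FROM users")
    executeQuery("SELECT * FROM orders")
    executeQuery("SELECT * FROM products")


if __name__ == "__main__":
    handleRequest("hello world")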

If we execute the above script, we get the following result printed out.

To instrument the above script we would first have to import the Dynatrace library by calling import dynatrace at the top of the script. The Dynatrace library is a Python wrapper which contains all the code needed for Python to communicate with the native ADK. The library is tested with Python 2.7 and 3.6 and can be downloaded from here.

The next step is to initialize the agent using dynatrace.init(). The initialization only needs to be executed once per process, so normally you would call that during the startup of the script. The init function will set up the connection to the AppMon collector and start monitoring the process. There is no need to call the uninitialize function at the end of the script, as it is registered with atexit and will therefore be called automatically.

Within the Dynatrace library there is a set of default values on lines 19 to 24.

If you need to change any of these values, for example the name of the agent, you can either change the default value within the Dynatrace library, or pass it as an argument to the init function.
dynatrace.init(agentName="MyAgent")

Once the agent is injected we will also need to add instrumentation to the code, for example to start PurePaths and capture functions. The two functions we will use for this are start_purepath and sensor. Both functions are context managers, so there is no need to call a function such as exit or end_purepath.

The start_purepath and sensor functions do not need to be passed information about the monitored function such as the function name, line number, file name and the arguments of the function. That is handled within the Dynatrace library using inspect. If you don't want to use the automatically captured information you can also pass your own values. For example, if you would like to have a sensor with the name “mySensor” and the captured arguments “hello” and “world”, the call would look like this:
dynatrace.sensor(method="mySensor", params_to_capture=["hello", "world"])

This is how the script will look after we added the four lines.
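Again, the instrumented version is shown as a screenshot in the original post. A plausible sketch of the four additions, using the wrapper functions named above (the exact placement of the calls is an assumption), is:

import dynatrace          # addition 1: import the Dynatrace wrapper library

dynatrace.init()          # addition 2: initialize the agent once per process


def executeQuery(query):
    # addition 3: capture this function (name, arguments and timings are
    # picked up automatically by the wrapper).
    with dynatrace.sensor():
        print("Executing query: " + query)


def handleRequest(argument):
    # addition 4: start a new PurePath for each handled request.
    with dynatrace.start_purepath():
        print("Handling request: " + argument)
        executeQuery("SELECT * FROM users")
        executeQuery("SELECT * FROM orders")
        executeQuery("SELECT * FROM products")


if __name__ == "__main__":
    handleRequest("hello world")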

By executing this script we receive the following PurePath in AppMon.

As you can see the instrumentation has automatically captured the function names, arguments, CPU/IO breakdown and name of the file. By looking at the details of the top node we can see that it also recorded the line number and full path to the file.

One important note regarding the instrumentation is what happens when your application uses forks to spawn several processes. The Native ADK registers one agent per process, meaning that if you create a new process for each request you would have to initialize the agent on every request. As this does not scale well, you should tweak the forking to spawn a new process only once every 10,000 or so requests (depending on the load).

Within the Dynatrace library there are already functions for linking the Python PurePath with PurePaths from other instrumented applications. The library ships as plain Python source and doesn't require a separate compiler, so you can easily change the code if you need additional functions from the Native ADK.

Break open the black box and get in control of your applications, even if they are created in Python!

The post Inside the Black Box Part 2 – Python appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

From a Commodore 64 to DevSecOps

Sonatype Blog - Mon, 02/27/2017 - 16:49
We all know the story: a farm, a kid, a Commodore 64, and a modem maxing out at 300 bps. A few unexpected phone bills later, and young Ian Allison is figuring out how to game the system so he can keep using his newfound  gateway to the world of tech. According to Ian, that is where he began...

To read more, visit our blog at www.sonatype.org/nexus.
Categories: Companies

Commercial and Open Source JMeter Plugins

Software Testing Magazine - Mon, 02/27/2017 - 10:00
Apache JMeter is an open source load testing tool developed by the Apache Foundation that can be used to test performance on both static and dynamic resources. It can be used to simulate a heavy...

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]
Categories: Communities

Reflecting on the Tech Lead Skills for Developers Course in Brazil

thekua.com@work - Sun, 02/26/2017 - 16:33

Earlier this month, I visited our Brazilian offices to run some internal training, called Tech Lead Skills for Developers. The trip felt a bit full circle as I had visited Brazil several years ago for the same reason and needed to develop the material. Instead of the handful of people I coached, I ran two full classes with a mix of people currently playing the Tech Lead role and those who might be stepping into the role.

The course I run uses a mix of training styles (short presentations, lots of time for story sharing, discussions, interactive exercises, brainstorming and plenty of question time). In general I’m really happy with the overall result: a good balance of covering lots of material, making it personalised and relevant, and giving people an opportunity to practice, gather feedback and have a go at applying it. The feedback for the course was quite consistent with that from past runs, telling me that the balance was just about right.

One of the great opportunities I have had running this course in different places is seeing some of the cultural implications and differences between continents. I learned, for example, that Brazil (traditionally) has a higher Power Distance Index (PDI on the Hofstede Dimensions), which means that, at least compared to the United Kingdom or America, authority is viewed a bit more strictly. In practice, this meant that a lot of the developers, working in more collaborative environments, seemed to almost take an extreme anti-leadership position, where any mark of authority was viewed poorly, or there was a reluctance to be seen taking on a title.

I also discovered that the word delegate in Portuguese had a negative association. As we discussed how effective leaders scale themselves through effective delegation, it was almost interpreted as a manager telling people to take care of the bad tasks – which, of course, wasn’t the intent! In the end, I tried to express effective delegation as a way of ensuring that all important responsibilities were being taken care of.

I am running this course again later this year in both Thailand and Singapore and look forward to seeing some more of the cultural differences that emerge during the discussions.

Categories: Blogs

Coaching Testers : An approach for finding answers

Thinking Tester - Sat, 02/25/2017 - 18:32
Often, I get mails from testers and budding testers asking questions and seeking my answers. Some of them are questions about something I wrote on my blog. Most of the questions are in the form "what is xxx" or "how to do yyy".
Here is my advice/suggestion on how one should approach getting answers to the questions that they have on a given topic (this applies to any quest to know something).
Before I answer a question, I will ask you: what do you think? How will you find out? What information or facilitation do you need to find an answer to this question?
This is how James Bach challenged me when I used to ask him questions in the beginning. As James kept pushing me back, I realized I must do some homework before asking. In the process, I learnt to find out for myself some hints or pointers to the question I have and then seek help by saying "Here is a question", "Here are my initial thoughts or pointers on this question", "Here is what I find contradicting or not fitting in" and "Here are the sources of information that I used".
Most of the time, through this process of figuring out, you will get answers in 2-3 iterations without any external help. In this process of finding out, when you are stuck, ask yourself: what information do I need? How will I get that information?

Give it a try - you will learn to find answers to your questions yourself - that would be a fascinating journey.

Rice Consulting Announces Accreditation of New Certification Training Course for Testing Cyber Security

Press Release: For Immediate Release

Oklahoma City, OK, February 24, 2017:  Randall Rice, internationally-recognized author, consultant and trainer in software testing and cyber security testing is excited to announce the accreditation of his newest course, ISTQB Advanced Security Tester Certification Course.

This is a course designed for software testers and companies who are looking for effective ways to test the security measures in place in their organization. This course teaches people in-depth ways to find security flaws in their systems and organizations before they are discovered by hackers.

The course is based on the Advanced Security Tester Syllabus from the International Software Testing Qualifications Board (ISTQB), for which Randall Rice chairs the Advanced Security Tester Syllabus working party. The American Software Testing Qualifications Board (ASTQB) granted accreditation on Tuesday, February 21, 2017. Accreditation verifies that the course content covers the certification syllabus and glossary. In addition, the reviewers ensure that the course covers the materials at the levels indicated in the syllabus.

“With thousands of cyber attacks occurring on a daily basis against many businesses and corporations, it is urgent that companies have some way to know if their security defenses are actually working effectively. One reason we keep hearing about large data breaches is because companies are trusting too much in technology and are failing to test the defenses that are in place. Simply having firewalls and other defenses installed does not ensure security,” explained Randall Rice. “This course provides a holistic framework that people can use to find vulnerabilities in their systems and organizations. This framework addresses technology, people and processes used to achieve security.”

This course is currently available on an on-site basis, public courses and in online format. For further details, visit http://www.riceconsulting.com/home/index.php/ISTQB-Training-for-Software-Tester-Certification/istqb-advanced-security-tester-course.html. To schedule a course to be presented in your company, contact Randall Rice at 405-691-8075 or by e-mail.

Randall W. Rice, author and trainer of the course is a Certified Tester, Advanced Level and is on the Board of Directors of the ASTQB. He is the co-author with William E. Perry of two books, “Surviving the Top Ten Challenges of Software Testing” and “Testing Dirty Systems.”

Categories: Blogs

100 Day Deep Work - Mastering Automation

Yet another bloody blog - Mark Crowther - Fri, 02/24/2017 - 14:25
Hi All,
I recently caught a tweet linking to a blog post by James Willett (Twitter / Blog) where he mentioned the idea of doing a 100 day Deep Work Challenge. The basic idea is that over 100 days you do a 90 minute focused session each day to achieve a defined learning or productivity goal. It’s such a great idea that I’ve decided to take up the challenge!
Now I haven’t read the book that James refers to, but hey, grab it via my Amazon link. I’ve instead read the very informative blog post he created. Make sure to read it and have a look at the infographic he produced. While I recognise that reading the book would probably be wise, I’m going to say I don’t need to, as I already know what I want to study and, having done similar challenges in the past, James’ post is a good enough guide.
Seriously, go read it http://james-willett.com/2017/02/the-100-day-deep-work-challenge/
So what’s my challenge?
A New Year’s Resolution
At the start of 2017 I made a commitment to transforming my technical capability with automation – by the end of the year. Yes, I’ve been doing automation as an element of my delivery toolkit for about 5 years, but I’ve never felt I have the deep expertise that I have around testing. I’m happy that 90% of the time I am the best tester in the room. I’m not being arrogant, it’s just that I’ve studied, written, presented, mentored, taught and applied what I do for the last 15+ years. I’d better be pretty good by now!
With automation however, I’ve always felt there’s a huge body of knowledge I have yet to acquire, and a depth of expertise that I have a duty to possess when delivering automation to clients but don’t currently have. That troubles me. My wife disagrees, saying I am probably better than I think. She may be right, but I know what level I want to achieve and how that looks in terms of delivery, and I’m not there yet.
#100DayDeepWork
So, to the Challenge. In summary, I’m going to focus on the deep learning and subsequent practical use of C#, Selenium WebDriver, SpecFlow (and so BDD) and Git. As I’m not paying for the SpecFlow+ Runner I’m going to generate reports using Pickles.
Let’s look in detail at the 6 Rules James outlines in his blog post:
1) 90 Minutes every day
That’s actually fine; I easily spend that each day studying generally anyway and, though it’s a longish session, the idea is that I accelerate the learning.
Caveat – There’s a catch here: I am NOT doing this at weekends. Simply because we have a family agreement that I can work and study as hard as I like in the week, but weekends are for family. Laptop shut, 100% attention to family. No exceptions.
2) No distractions
As Rule 3 stipulates doing the Deep Work first, that’s fine as I’ll be locked in a room on my own.
3) Deep Work first
The Deep Work will be done first thing in the morning so that’s also just fine. It means getting up a notable amount of time earlier, but that just means I need to get to bed earlier. Not a bad thing as it’ll stop me ‘ghosting’ around through the small hours as I often do. I need to be out to work by 8.00am, so my start time is going to be 6am. Ugh, let’s see if I can keep that up!
4) Set an Overall Goal
The Goal is reasonably simple to prove, as a friend and I have set up a new site called www.TheSeleniumGuys.com, where the goal is to provide a real back-to-basics, step-by-step series of posts and pages that allow newcomers to automation to get set up and running with Selenium based automation. If that site isn’t content heavy by mid-year, you know I didn’t complete the challenge.
5) Summarise every session
Every session will be summarised on this blog, using the tag #100DayDeepWork and I’ll post a link on Twitter each day and sometimes on LinkedIn. Yep, no hiding if I succeed or fail. I’ll not only post the update about what I’m learning, I’ll share how the challenge is going generally.
6) Chart your Progress
I’m going to make a Calendar / Chart with the days showing, then publish it each day on this blog and link it via Twitter too. As per the Caveat in Step 1, that means I’ll achieve the 100 days in roughly 5 months. Feels like a long haul already.
There it is; 100 days of Deep Work, 100 Tweets, 100 Blog posts. Let’s see how this goes!
As a last thought – Let’s add a Good Cause into the mix
Blog views and advert clicks off those posts generate revenue. My ad revenue is minimal, about £1 a week on average. If you take the time to view the posts daily, you’ll generate ad revenue. If you see an ad you like then click it and there’ll be a bit extra generated. At the footer of each post I’ll add any affiliate links I have. Use them to generate affiliate revenue.
At the end of the 100 days I’ll add up all the revenue generated from this crazy project and donate it to a charity you suggest, plus 50% from my own pocket :)
OK, onto the Deep Work!
Mark



Categories: Blogs

The Testing Kraftwerk

Hiccupps - James Thomas - Fri, 02/24/2017 - 10:20

If you're around testers or reading about testing it won't be long before someone mentions models. (Probably after context but some time before tacit knowledge.)

As a new tester in particular, you may find yourself asking what they are exactly, these models. It can be daunting when, having asked to see someone else's model, you are shown a complex flowchart, or a state diagram, or a stack of UML, a multi-coloured mindmap, or a barrage of blocked-out architectural components linked by complex arrangements of arrows with various degrees of dottedness.

But stay strong, my friend, because - while those things and many others can be models and can be useful - models are really just a way of describing a system, typically to aid understanding and often to permit predictions about how the system will behave under given conditions. What's more, the "system" need not be the entirety of whatever you're looking at nor all of the attributes of it.

It's part of the craft of testing to be able to build a model that suits the situation you are in at the time. For some web app, say, you could make a model of a text field, the dialog box it is in, the client application that launched it, the client-server architecture, or the hardware, software and comms stacks that support the client and server.

You can model different bits of the same system at the same time in different ways. And that can be powerful, for example when you realise that your models are inconsistent, because if that's the case, perhaps the system is inconsistent too ...

I'm a simple kind of chap and I like simple models, if I can get away with them. Here's a bunch of my favourite simple model structures and some simple ideas about when I might try to use them, rendered simply.

Horizontal Line
You're looking at some software in which events are triggered by other events. The order of the events is important to the correct functioning of the system. You could try to model this in numerous ways, but a simple way, a foothold, a first approximation, might be to simply draw a horizontal line and mark down the order you think things are happening in.


Well done. There's your model, of the temporal relationship between events. It's not sophisticated, but it represents what you think you know. Now test it by interacting with the system. Ah, you found out that you can alter the order. Bingo, your model was wrong, but now you can improve it. Add some additional horizontal lines to show relationships. Boom!

Edit: Synchronicity. On the day I published this post, Alex Kotliarsky published Plotting Ideas which also talks about how simple structures can help to understand, and extend understanding of, a space. The example given is a horizontal line being used to model types of automated testing.

Vertical Pile
So horizontal lines are great, sure, but let's not leave the vertical out of it. While horizontal seems reasonably natural for temporal data, vertical fits nicely with stacks. That might be technology stacks, or call sequences, process phases, or something else.

Here's an example showing how some calls to a web server go through different libraries, and which might be a way in to understanding why some responses conform to HTTP standards and some don't. (Clue: the ones that don't are the ones you hacked up yourself.)


Scatter Plot
Combine your horizontal and vertical and you've got a plane on which to plot a couple of variables. Imagine that you're wondering how responsiveness of your application varies with the number of objects created in its database. You run the experiments and you plot the results.


If you have a couple of different builds you might use different symbols to plot them both on the same chart, effectively increasing its dimensionality. Shape, size, annotations, and more can add additional dimensions.

Now you have your chart you can see where you have data and you can begin to wonder about the behaviour in those areas where you have no data. You can then arrange experiments to fill them, or use your developing understanding of the application to predict them. (And then consider testing your prediction, right?)

Just two lines and a few dots, a biro and a scrap of paper. This is your model, ladies and gentlemen.

Table
A picture is worth a thousand words, they say. A table can hold its own in that company. When confronted with a mass of text describing how similar things behave in different ways under similar conditions I will often reach for a table so that I can compare like with like, and see the whole space in one view. This kind of approach fits well when there are several things that you want to compare in several dimensions.

In this picture, I'm imagining that I've taken written reports about the work that was done to test some versions of a piece of software against successive versions of the same specification. As large blocks of text, the comparisons are hard to make. Laid out as a table I have visibility of the data and I have the makings of a model of the test coverage.


The patterns that this exposes might be interesting. Also, the places that there are gaps might be interesting. Sometimes those gaps highlight things that were missed in the description, sometimes they're disallowed data points, sometimes they were missed in the analysis. And sometimes they point to an error in the labels. Who knows, this time? Well, you will soon. Because you've seen that the gaps are there you can go and find out, can't you?

I could have increased the data density of this table in various ways. I could have put traffic lights in each populated cell to give some idea of the risk highlighted by the testing done, for example. But I didn't. Because I didn't need to yet and didn't think I'd want to and it'd take more time.

Sometimes that's the right decision and sometimes not. You rarely know for sure. Models themselves, and the act of model building, are part of your exploratory toolkit and subject to the same kinds of cost/value trade-offs as everything else.

A special mention here for Truth tables which I frequently find myself using to model inputs and corresponding outcomes, and which tester isn't fascinated by those two little blighters?

Circle
The simple circle. Once drawn you have a bipartition, two classes. Inside and outside. Which of the users of our system run vi and Emacs? What's that? Johnny is in both camps? Houston, we have a problem.


This is essentially a two variable model, so why wouldn't we use a scatter plot? Good question. In this case, to start with I wasn't so interested in understanding the extent of vi use against Emacs use for a given user base. My starting assumption was that our users are members of one editor religion or another and I want to see who belongs in each set. The circle gives me that. (I also used a circle model for separating work I will do from work I won't do in Put a Ring on It.)

But it also brings Johnny into the open. The model has exposed my incorrect assumption. If Johnny had happened not to be in my data set, then my model would fit my assumptions and I might happily continue to predict that new users would fall into one of the two camps.

Implicit in that last paragraph are other assumptions, for example that the data is good, and that it is plotted accurately. It's important to remember that models are not the thing that they model. When you see something that looks unexpected in your model, you will usefully ask yourself these kinds of questions:

  • is the system wrong?
  • is the data wrong?
  • is the model wrong?
  • is my interpretation wrong?
  • ...
Venn Diagram
The circle's elder sister. Where the circle makes two sets, the Venn makes arbitrarily many. I used a Venn diagram only this week - the spur for this post, as it happens - to model a collection of text filters whose functionality overlaps. I wanted to understand which filters overlapped with each other. This is where I got to:


In this case I also used the size of the circles as an additional visual aid. I think filter A has more scope than any of the others so I made it much larger. (I also used a kind of Venn diagram model of my testing space in Your Testing is a Joke.)

And now I have something that I can pass on to others on my team - which I did - and perhaps we can treat each of the areas on the diagram as an initial stab at a set of equivalence classes that might prove useful when testing this component.

In this post, I've given a small set of model types that I use frequently. I don't think that any of the examples I've given couldn't be modelled another way and on any given day I might have modelled them other ways. In fact, I will often hop between attempts to model a system using different types as a way to provoke thought, to provide another perspective, to find a way in to the problem I'm looking at.

And having written that last sentence I now see that this blog post is the beginnings of a model of how I use models. But sometimes that's the way it works too - the model is an emergent property of the investigation and then feeds back into the investigation. It's all part of the craft.
Image: In Deep Music Archive

Edit: While later seeking some software to draw a more complex version of the Venn Diagram model I found out that what I've actually drawn here is an Euler Diagram.
Categories: Blogs

Blue Ocean Dev Log: February Week #4

We’re counting down the weeks until Blue Ocean 1.0. In all the excitement I forgot to post a dev log last week, so I will make up for it this week. In the last 10 days, 2 betas went out: b22 and b23, and a preview release of the editor. We expect the next release will be named a release candidate (we know there is still more to go in, but want to signal that things are getting into the final stages!). The Gitter chat room is getting busier, so join in! Also last week, the Blue Ocean Pipeline Editor was presented at the Jenkins Online Meetup, embedded below.

Feature Highlights

You can...
Categories: Open Source

Build a Global Continuous Delivery Practice - it is Easy as of Today!

Today Starts a New Era for Companies Who Want to Setup a Global Continuous Delivery Practice.

In the last few years, CloudBees has witnessed first hand the evolution and adoption of DevOps and continuous delivery (CD) in organizations. 

Originally, most of our discussions were “Jenkins” discussions. Teams within organizations had made the decision to use Jenkins as their de facto tool for continuous integration (CI) and/or continuous delivery (CD). As Jenkins became their unique gateway to production (i.e. anything that lands in production has to travel through a Jenkins pipeline to get there), Jenkins became as critical as production itself for them: if you can’t upgrade or fix anything into production, you have a big (production) problem! To make those teams successful, we provided a number of extensions on top of Jenkins (such as role-based access control and other features), as well as 24/7 support backed by our worldwide team of Jenkins experts. This is today an extraordinarily successful CloudBees offering that helps hundreds of teams and thousands of users around the globe operate a rock-solid Jenkins cluster.

In the last few years, however, the tone of these discussions has changed. We are now meeting with a lot of enterprises that are looking at building a formal continuous delivery “practice” in their organization. They want to standardize the way continuous delivery happens, across the board. They want to be able to compare the productivity and velocity of all of their teams. For them, gone are the days of team-specific continuous delivery solutions. They have learned a lot through what leading-edge teams have done, they have set up proofs of concept and they are now ready to leverage their critical mass to formalize, at scale, the best practices that fit their business.

What they are looking for is a single, unified continuous delivery solution that gives them visibility into all of their teams and applications. This, in turn, requires a platform that knows how to integrate with legacy, traditional and leading edge environments, from AIX to Docker on AWS - not one different CD solution per project or technology of the day! If speed and agility matter for individual applications, they certainly matter to the IT organization itself! As such, these organizations can’t afford to have a one month lag time anytime they onboard a new team. They can’t even afford one day. They want to onboard new teams or new projects in a snap and give them a best-of-breed environment in which to build their delivery pipelines. Also, they want a platform that’s cost efficient. Efficient, both in terms of how it manages the underlying infrastructure at scale, and also in how much (or, rather, how little) work is involved in managing the platform itself.

Consequently, in order to fulfill that need, CloudBees is launching today CloudBees Jenkins Enterprise, the first and only platform that enables continuous delivery at scale for enterprises, based on the de facto DevOps hub, Jenkins. 

CloudBees Jenkins Enterprise is a full-fledged platform that can be deployed anywhere: Linux, VMware, OpenStack, AWS, to name a few. It takes ownership of the provided infrastructure and provides a fully-managed Continuous Environment built on Jenkins. Based on Docker containers, CloudBees Jenkins Enterprise provides a self-service, elastic CD environment that can be centrally managed. It also enables enterprises to set up global policies and best practices that can be enforced among all teams across the organization. Furthermore, the platform automatically handles backup and restore, automatically detects faulty behaviors and properly recovers from those situations. This leads to a continuous delivery platform with a very low cost of maintenance and an excellent usage of the infrastructure through high-density Jenkins deployments that can readily scale up to thousands of teams, and tens of thousands of projects and users. 

If you are interested to know more, I’d suggest reading the excellent blog post by Brian Dawson, Product Marketing Manager at CloudBees.

Onward,

Sacha

Blog Categories: Company News
Categories: Companies

DevOps from the Front Lines: Enterprise Adoption Principles and Pitfalls

So, what do I think of when I’m told a company wants to adopt DevOps? The first thing that comes to mind is the size of the organization, and how far they want to take DevOps best practices. I really want to know what DevOps adoption will mean for the company.

In my experience, it gets especially interesting for large organizations that rely on a lot of applications and teams. There are important principles that support adoption. First, adoption should include a clear directive of how people and processes will be organized to manage application lifecycles and their interdependencies. Adoption also includes selecting the optimal technologies to manage the lifecycle and pipeline.

Next, DevOps governance should specifically address:

  • Cloud-based infrastructure services – both private and public options
  • Development Workstation Standards and Images
  • Source Control Standards
  • Continuous Integration
  • Continuous Delivery
  • Testing methods and required coverage
  • Configuration Management
  • Continuous Monitoring
  • Standardized Method of Delivering Information for each application's Business Health, Operational Health, Testing Health, Development Health and Pipeline Health

Finally, leadership must carefully select the right applications in the enterprise that will truly benefit from DevOps adoption.

Now that we have an outline of how to structure adoption, let’s look at four of the biggest pitfalls that I’ve seen:

No Comprehensive Plan

This is the big one. IT executives usually have the ambition, but they often lack commitment to a clear, well-defined design for the comprehensive transitional approach that DevOps requires. Without a single top-down approach and sponsorship, DevOps initiatives face crippling disconnects across critical stakeholders such as infrastructure services, testing, project management, development and operations. Without a unified plan, everybody ends up with different—often conflicting—priorities. The DevOps team is broken into factions. Championing around one common cause for the necessary application is impossible.

Tell-tale symptoms of fragmentation include:

  1. Cloud Services aren’t available to the application team.
  2. Configuration Management isn’t used.
  3. Feedback loops only work for certain members of certain groups—not across teams.
  4. Development doesn’t prioritize necessary DevOps tasks.
  5. Agile is only employed by development–not throughout the entire pipeline.
  6. Department silos stifle innovation.
  7. Isolated individual groups spend too much time proving to other silos why DevOps is needed.
No Cross-Functional Training

Many companies suffer from a lack of training on technologies and processes that are a part of the DevOps initiative. In nearly all cases I’ve seen, the application team is cross-functional, but people from other areas simply fill defined, siloed roles. When organizations don’t educate everyone on cross-functional processes and technologies, DevOps can die a slow, painful death.

Missing DevOps Roles

This problem follows directly from the first two pitfalls, but it can go unnoticed until trouble strikes.  A lack of planning and training means that some critical roles and responsibilities are simply left out of the org chart for the DevOps team. Without appropriate divisions of labor and chains of responsibility, DevOps is impossible. A well-defined cross-functional agile team should be formed around an application pipeline rather than multiple teams representing each of their corresponding departments.

Information Stuck in Silos

Organizations may have many valuable reports, dashboards and data that they've created for the application or pipeline within operations, for testing results, and/or for showing development status, among other things. Along with these sources of information, groups may have improved processes within these silos, too. However, without a consistent, agreed-upon central way to socialize and/or regularly communicate these efforts, individual team contributions remain just that--individual. This prevents useful information from making a big impact across the entire pipeline. Siloed insight means siloed innovation and improvement.

This is just a brief snapshot of a few strategies to use and pitfalls to avoid when adopting DevOps. Just remember that the best adoption initiatives start at the top. They’re backed by committed, prioritized and coordinated teams across all functional areas.

The post DevOps from the Front Lines: Enterprise Adoption Principles and Pitfalls appeared first on Dynatrace blog – monitoring redefined.

Categories: Companies

5 Principles of Agile Testing & How Ranorex Fits In

Ranorex - Thu, 02/23/2017 - 15:39

Agile has changed the way we approach development and testing. At least it should. Release cycles are shorter, requirements change rapidly and quality standards are higher than they’ve ever been. To have the slightest chance at scoring a medal in the agile testing games, we’ve got to understand the rules of the game and get the right tools. So let’s get at it together.

I am going to break down the requirements agile testing should meet into five basic principles. Based on these, I'll show you the benefits we've experienced ourselves when using Ranorex for agile testing and how your team can also benefit from using Ranorex for agile test automation.

Fast feedback

Best case scenario: you test the full range of functionality for every single code change. When performing continuous testing, it's essential that the whole development team gets immediate feedback on the executed tests. Therefore, an easy-to-access and easy-to-read reporting mechanism is indispensable. This is where our test reports step in. You can fully integrate the lightweight test executables of our test suites into any continuous integration system and share the resulting, easily understandable report files with the whole team to ensure a high level of transparency. Not fast enough? You can also get live feedback during test execution: simply open the test report while the test is still running to see if any test cases have failed so far. The earlier you know, the faster you can react.

Progressive Report

High level of automation

The fact is that manual testing is slow, labor-intensive, inconsistent and error-prone. If you want to rapidly respond to changing requirements and constant code changes, a high level of automation is absolutely essential. Next to basic unit tests, acceptance tests and integration tests are highly important to test the full range of functionality. To reduce time and increase quality, test automation has to be an integral part of the project from the beginning. You can use Ranorex Studio to automate your entire range of UI tests.

High Level of Automation

Low overhead

Sometimes it makes sense to create a simple throwaway test, which is used only in one specific sprint. In these cases, there's neither time nor resources for a big setup. Using Ranorex Studio, you can easily automate such tests and integrate them into your existing test environment. The lightweight test automation projects of Ranorex result in executable files, which you can trigger directly from the command line. The executable files inform you whether the test has failed or succeeded. In addition, a report file can provide detailed information about the test run. Perfect conditions for integration into any continuous integration process.

Lightweight-Execution

Termination of testing roles

In an agile software development process, the whole team is responsible for quality. The borders between the traditional understanding of testers and developers blur. So typical testers should be able to write unit tests or simple integration tests, while developers should also record UI tests. With a broad set of tools suited to all skill sets, you can create ideal conditions for developers and testers to work together on projects. Pick and choose – it's up to you: easily create script-free automated tests using the Ranorex Recorder, or quickly create and edit tests entirely in C# or VB.Net.

One Tool Different Skill Sets

Termination of testing phase

Finally, in agile iteration cycles there's typically no time for sequential processing of all testing levels (unit tests, integration tests, system tests, acceptance tests…). Therefore test automation has to be an integral part of the project from the beginning. As Ranorex Studio does not rely on dependencies, you can create test scenarios at any time. As soon as you know what the UI will look like, you can implement an automated test and refine the detailed paths of the repository elements as you go along.

UI Test Driven Approach

As we've seen, agile testing is based on a high level of test automation. To set the ideal conditions for your team, make sure to get a tool, or a set of tools, that enables you to directly integrate automated testing into your continuous delivery pipeline, ensures a high level of transparency and enables collaboration within your team.

See the benefits for yourself

The post 5 Principles of Agile Testing & How Ranorex Fits In appeared first on Ranorex Blog.

Categories: Companies
