Here is my advice on how to approach getting answers to the questions you have on a given topic (this applies to any quest for knowledge).
Before I answer a question, I will ask you: what do you think? How will you find out? What information or facilitation do you need to find the answer to this question?
This is how James Bach challenged me when I used to ask him questions in the beginning. As James kept pushing back, I realized I must do some homework before asking. In the process, I learnt to find some hints or pointers for my question myself, and then seek help by saying: "Here is a question", "Here are my initial thoughts or pointers on this question", "Here is what I find contradictory or that doesn't fit", and "Here are the sources of information that I used".
Most of the time, through this process of figuring things out, you will get answers in 2-3 iterations without any external help. In this process of finding out, when you are stuck, ask yourself: what information do I need? How will I get that information?
Give it a try - you will learn to find answers to your questions yourself - that would be a fascinating journey.
Rice Consulting Announces Accreditation of New Certification Training Course for Testing Cyber Security
Oklahoma City, OK, February 24, 2017: Randall Rice, internationally-recognized author, consultant and trainer in software testing and cyber security testing, is excited to announce the accreditation of his newest course, the ISTQB Advanced Security Tester Certification Course.
This is a course designed for software testers and companies who are looking for effective ways to test the security measures in place in their organization. This course teaches people in-depth ways to find security flaws in their systems and organizations before they are discovered by hackers.
The course is based on the Advanced Security Tester Syllabus from the International Software Testing Qualifications Board (ISTQB), of which Randall Rice is chair of the Advanced Security Tester Syllabus working party. The American Software Testing Qualifications Board (ASTQB) granted accreditation on Tuesday, February 21, 2017. Accreditation verifies that the course content covers the certification syllabus and glossary. In addition, the reviewers ensure that the course covers the materials at the levels indicated in the syllabus.
“With thousands of cyber attacks occurring on a daily basis against many businesses and corporations, it is urgent that companies have some way to know if their security defenses are actually working effectively. One reason we keep hearing about large data breaches is because companies are trusting too much in technology and are failing to test the defenses that are in place. Simply having firewalls and other defenses installed does not ensure security,” explained Randall Rice. “This course provides a holistic framework that people can use to find vulnerabilities in their systems and organizations. This framework addresses technology, people and processes used to achieve security.”
This course is currently available on-site, as a public course, and in an online format. For further details, visit http://www.riceconsulting.com/home/index.php/ISTQB-Training-for-Software-Tester-Certification/istqb-advanced-security-tester-course.html. To schedule a course to be presented in your company, contact Randall Rice at 405-691-8075 or by e-mail.
Randall W. Rice, author and trainer of the course is a Certified Tester, Advanced Level and is on the Board of Directors of the ASTQB. He is the co-author with William E. Perry of two books, “Surviving the Top Ten Challenges of Software Testing” and “Testing Dirty Systems.”
I recently caught a tweet linking to a blog post by James Willett (Twitter / Blog) where he mentioned the idea of doing a 100 day Deep Work Challenge. The basic idea is that over 100 days you do a 90 minute focused session each day to achieve a defined learning or productivity goal. It's such a great idea that I've decided to take up the challenge!
Now I haven't read the book that James refers to (but hey, grab it via my Amazon link). Instead, I've read the very informative blog post he created. Make sure to read it and have a look at the infographic he produced. While I recognise reading the book would probably be wise, I'm going to say I don't need to, as I already know what I want to study and, having done similar challenges in the past, James' post is a good enough guide.
Seriously, go read it http://james-willett.com/2017/02/the-100-day-deep-work-challenge/
So what’s my challenge?
A New Year's Resolution
At the start of 2017 I made a commitment to transforming my technical capability with automation - by the end of the year. Yes, I've been doing automation as an element of my delivery toolkit for about 5 years, but I've never felt I have the deep expertise that I have around testing. I'm happy that 90% of the time I am the best tester in the room. That's not arrogance; it's just that I've studied, written, presented, mentored, taught and applied what I do for the last 15+ years. I'd better be pretty good by now!
With automation, however, I've always felt there's a huge body of knowledge I have yet to acquire, and a depth of expertise that I have a duty to possess when delivering automation to clients but don't currently have. That troubles me. My wife disagrees, saying I am probably better than I think. She may be right, but I know what level I want to achieve and how that looks in terms of delivery, and I'm not there yet.
#100DayDeepWork
So, to the Challenge. In summary, I'm going to focus on the deep learning and subsequent practical use of C#, Selenium WebDriver, SpecFlow (and so BDD) and Git. As I'm not paying for the SpecFlow+ Runner, I'm going to generate reports using Pickles.
Let's look in detail at the 6 Rules James outlines in his blog post:
1) 90 Minutes Every Day
That's actually fine; I easily spend that each day studying generally anyway, and though it's a longish session, the idea is that I accelerate the learning.
Caveat - There's a catch here: I am NOT doing this at weekends. Simply because we have a family agreement that I can work and study as hard as I like in the week, but weekends are for family. Laptop shut, 100% attention to family. No exceptions.
2) No distractions
As Rule 3 stipulates doing the Deep Work first, that's fine, as I'll be locked in a room on my own.
3) Deep Work first
The Deep Work will be done first thing in the morning, so that's also just fine. It means getting up a notable amount of time earlier, but that just means I need to get to bed earlier. Not a bad thing, as it'll stop me 'ghosting' around through the small hours as I often do. I need to be out to work by 8.00am, so my start time is going to be 6am. Ugh, let's see if I can keep that up!
4) Set an Overall Goal
The goal is reasonably simple to prove, as a friend and I have set up a new site called www.TheSeleniumGuys.com, where the goal is to provide a real back-to-basics, step-by-step series of posts and pages that allow newcomers to automation to get set up and running with Selenium based automation. If that site isn't content heavy by mid-year, you know I didn't complete the challenge.
5) Summarise every session
Every session will be summarised on this blog, using the tag #100DayDeepWork, and I'll post a link on Twitter each day and sometimes on LinkedIn. Yep, no hiding whether I succeed or fail. I'll not only post an update about what I'm learning, I'll share how the challenge is going generally.
6) Chart your Progress
I'm going to make a calendar / chart with the days showing, then publish it each day on this blog and link it via Twitter too. As per the caveat in Rule 1, that means I'll achieve the 100 days in roughly 5 months. Feels like a long haul already.
There it is: 100 days of Deep Work, 100 tweets, 100 blog posts. Let's see how this goes!
As a last thought - let's add a Good Cause into the mix
Blog views and ad clicks on those posts generate revenue. My ad revenue is minimal, about £1 a week on average. If you take the time to view the posts daily, you'll generate ad revenue. If you see an ad you like, then click it and there'll be a bit extra generated. At the footer of each post I'll add any affiliate links I have. Use them to generate affiliate revenue.
At the end of the 100 days I'll add up all the revenue generated from this crazy project and donate it to a charity you suggest, plus 50% from my own pocket :)
OK, onto the Deep Work!
If you're around testers or reading about testing it won't be long before someone mentions models. (Probably after context but some time before tacit knowledge.)
As a new tester in particular, you may find yourself asking what they are exactly, these models. It can be daunting when, having asked to see someone else's model, you are shown a complex flowchart, or a state diagram, or a stack of UML, a multi-coloured mindmap, or a barrage of blocked-out architectural components linked by complex arrangements of arrows with various degrees of dottedness.
But stay strong, my friend, because - while those things and many others can be models and can be useful - models are really just a way of describing a system, typically to aid understanding and often to permit predictions about how the system will behave under given conditions. What's more, the "system" need not be the entirety of whatever you're looking at nor all of the attributes of it.
It's part of the craft of testing to be able to build a model that suits the situation you are in at the time. For some web app, say, you could make a model of a text field, the dialog box it is in, the client application that launched it, the client-server architecture, or the hardware, software and comms stacks that support the client and server.
You can model different bits of the same system at the same time in different ways. And that can be powerful, for example when you realise that your models are inconsistent, because if that's the case, perhaps the system is inconsistent too ...
I'm a simple kind of chap and I like simple models, if I can get away with them. Here's a bunch of my favourite simple model structures and some simple ideas about when I might try to use them, rendered simply.
Horizontal LineYou're looking at some software in which events are triggered by other events. The order of the events is important to the correct functioning of the system. You could try to model this in numerous ways, but a simple way, a foothold, a first approximation, might be to simply draw a horizontal line and mark down the order you think things are happening in.
Well done. There's your model, of the temporal relationship between events. It's not sophisticated, but it represents what you think you know. Now test it by interacting with the system. Ah, you found out that you can alter the order. Bingo, your model was wrong, but now you can improve it. Add some additional horizontal lines to show relationships. Boom!
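That first-approximation model can even be sketched as data. Here's a minimal sketch in Python, using hypothetical event names of my own invention: the expected sequence is the "horizontal line", and we check an observed trace against it.

```python
# A hypothetical event-order model: the expected sequence is our "horizontal line".
EXPECTED_ORDER = ["login", "load_profile", "render_page", "logout"]

def order_violations(observed):
    """Return pairs of observed events that occur in the opposite order to the model."""
    position = {event: i for i, event in enumerate(EXPECTED_ORDER)}
    violations = []
    for i in range(len(observed)):
        for j in range(i + 1, len(observed)):
            a, b = observed[i], observed[j]
            # Only compare events the model knows about.
            if a in position and b in position and position[a] > position[b]:
                violations.append((a, b))
    return violations

# A trace matching the model produces no violations ...
assert order_violations(["login", "load_profile", "render_page", "logout"]) == []
# ... while a trace where the page rendered before the profile loaded flags it.
assert order_violations(["login", "render_page", "load_profile"]) == [("render_page", "load_profile")]
```

When the assertion fails, that's not necessarily a bug in the system: it might be your model that needs updating, which is exactly the iteration described above.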
Vertical Pile
So horizontal lines are great, sure, but let's not leave the vertical out of it. While horizontal seems reasonably natural for temporal data, vertical fits nicely with stacks. That might be technology stacks, call sequences, process phases, or something else.
Here's an example showing how some calls to a web server go through different libraries, and which might be a way in to understanding why some responses conform to HTTP standards and some don't. (Clue: the ones that don't are the ones you hacked up yourself.)
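A stack model like that can be captured in a few lines of code too. This is a sketch under my own invented assumptions (the endpoint paths and layer names are illustrative, not from any real system): each endpoint is tagged with the pile of layers its calls pass through, and the ones that bypass the standard HTTP library are the first suspects for non-conformant responses.

```python
# A hypothetical vertical-pile model of a web server: endpoint -> layers traversed.
STACK = {
    "/users":  ["nginx", "framework", "http_lib"],   # served via the standard HTTP library
    "/orders": ["nginx", "framework", "http_lib"],
    "/debug":  ["nginx", "custom_handler"],          # hand-rolled, bypasses the HTTP library
}

def suspects(stack_model):
    """Endpoints whose pile skips the standard HTTP library: likely sources
    of the responses that don't conform to the HTTP standards."""
    return sorted(path for path, layers in stack_model.items() if "http_lib" not in layers)

assert suspects(STACK) == ["/debug"]
```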
Scatter Plot
Combine your horizontal and vertical and you've got a plane on which to plot a couple of variables. Imagine that you're wondering how the responsiveness of your application varies with the number of objects created in its database. You run the experiments and you plot the results.
If you have a couple of different builds you might use different symbols to plot them both on the same chart, effectively increasing its dimensionality. Shape, size, annotations, and more can add additional dimensions.
Now you have your chart you can see where you have data and you can begin to wonder about the behaviour in those areas where you have no data. You can then arrange experiments to fill them, or use your developing understanding of the application to predict them. (And then consider testing your prediction, right?)
Just two lines and a few dots, a biro and a scrap of paper. This is your model, ladies and gentlemen.
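The "where do I have no data?" question from the scatter plot can be asked programmatically as well. Here's a sketch with made-up measurements (the build names, object counts and response times are all invented for illustration): each point is (number of objects in the database, response time in ms), and we look for object counts on our sampling grid that have no data yet.

```python
# Hypothetical scatter-plot data: (objects in database, response time in ms) per build.
measurements = {
    "build_A": [(100, 20), (500, 45), (1000, 90), (5000, 400)],
    "build_B": [(100, 18), (1000, 70), (5000, 350)],
}

def unsampled_regions(points, grid):
    """Which object counts on our grid have no data yet? Those are the regions
    worth arranging experiments for, or predicting and then testing the prediction."""
    sampled = {n for n, _ in points}
    return [n for n in grid if n not in sampled]

grid = [100, 500, 1000, 5000, 10000]
assert unsampled_regions(measurements["build_A"], grid) == [10000]
assert unsampled_regions(measurements["build_B"], grid) == [500, 10000]
```

Plotting both builds on the same axes with different symbols, as described above, is then just a matter of handing these same pairs to whatever charting tool you like, or a biro and a scrap of paper.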
Table
A picture is worth a thousand words, they say. A table can hold its own in that company. When confronted with a mass of text describing how similar things behave in different ways under similar conditions, I will often reach for a table so that I can compare like with like and see the whole space in one view. This kind of approach fits well when there are several things that you want to compare across several dimensions.
In this picture, I'm imagining that I've taken written reports about the work that was done to test some versions of a piece of software against successive versions of the same specification. As large blocks of text, the comparisons are hard to make. Laid out as a table I have visibility of the data and I have the makings of a model of the test coverage.
The patterns that this exposes might be interesting. Also, the places that there are gaps might be interesting. Sometimes those gaps highlight things that were missed in the description, sometimes they're disallowed data points, sometimes they were missed in the analysis. And sometimes they point to an error in the labels. Who knows, this time? Well, you will soon. Because you've seen that the gaps are there you can go and find out, can't you?
I could have increased the data density of this table in various ways. I could have put traffic lights in each populated cell to give some idea of the risk highlighted by the testing done, for example. But I didn't, because I didn't need it yet, didn't think I'd want it, and it would have taken more time.
Sometimes that's the right decision and sometimes not. You rarely know for sure. Models themselves, and the act of model building, are part of your exploratory toolkit and subject to the same kinds of cost/value trade-offs as everything else.
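The coverage table and its gaps can be sketched as a tiny data structure. All the version and spec names here are invented for illustration: rows are software versions, columns are spec versions, and any cell without a report is a question to go and answer.

```python
# A toy coverage table: (software version, spec version) -> reported testing.
reports = {
    ("sw 1.0", "spec A"): "tested",
    ("sw 1.0", "spec B"): "tested",
    ("sw 2.0", "spec B"): "tested",
}
rows = ["sw 1.0", "sw 2.0"]
cols = ["spec A", "spec B"]

def gaps(table, rows, cols):
    """Cells with no report: missed in the description, disallowed data points,
    missed in the analysis, or mislabelled. Go and find out which."""
    return [(r, c) for r in rows for c in cols if (r, c) not in table]

assert gaps(reports, rows, cols) == [("sw 2.0", "spec A")]
```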
A special mention here for truth tables, which I frequently find myself using to model inputs and corresponding outcomes - and which tester isn't fascinated by those two little blighters?
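A truth table is trivially cheap to generate once you've written down the rule you believe the system follows. Here's a minimal sketch with a made-up access rule (the flag names and the rule itself are hypothetical): enumerate every input combination and record the outcome you expect, then compare against what the system actually does.

```python
from itertools import product

# A hypothetical rule we believe the system implements.
def should_grant_access(logged_in, has_licence):
    return logged_in and has_licence

# One row per input combination: (logged_in, has_licence, expected outcome).
truth_table = [
    (a, b, should_grant_access(a, b))
    for a, b in product([False, True], repeat=2)
]

# Two boolean inputs give four rows; only (True, True) grants access.
assert len(truth_table) == 4
assert [row[2] for row in truth_table] == [False, False, False, True]
```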
Circle
The simple circle. Once drawn, you have a bipartition: two classes, inside and outside. Which of the users of our system run vi and which run Emacs? What's that? Johnny is in both camps? Houston, we have a problem.
This is essentially a two variable model, so why wouldn't we use a scatter plot? Good question. In this case, to start with, I wasn't so interested in understanding the extent of vi use against Emacs use for a given user base. My starting assumption was that our users are members of one editor religion or the other, and I wanted to see who belongs in each set. The circle gives me that. (I also used a circle model for separating work I will do from work I won't do in Put a Ring on It.)
But it also brings Johnny into the open. The model has exposed my incorrect assumption. If Johnny had happened not to be in my data set, then my model would fit my assumptions and I might happily continue to predict that new users would fall into one of the two camps.
Implicit in that last paragraph are other assumptions, for example that the data is good, and that it is plotted accurately. It's important to remember that models are not the thing that they model. When you see something that looks unexpected in your model, you will usefully ask yourself these kinds of questions:
- is the system wrong?
- is the data wrong?
- is the model wrong?
- is my interpretation wrong?
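The circle model maps directly onto sets, which makes the broken-assumption check mechanical. A sketch, with invented user names: the model assumes a clean bipartition, and a non-empty intersection is the counter-example that breaks it.

```python
# The circle model as sets: hypothetical users partitioned by editor religion.
vi_users = {"alice", "bob", "johnny"}
emacs_users = {"carol", "johnny"}

# The model assumes the two camps are disjoint; the intersection
# exposes anyone who breaks that assumption.
both_camps = vi_users & emacs_users
assert both_camps == {"johnny"}
```

Had Johnny not been in the data set, the intersection would have been empty and the faulty assumption would have survived, which is exactly the caution in the paragraph above.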
In this case I also used the size of the circles as an additional visual aid. I think filter A has more scope than any of the others so I made it much larger. (I also used a kind of Venn diagram model of my testing space in Your Testing is a Joke.)
And now I have something that I can pass on to others on my team - which I did - and perhaps we can treat each of the areas on the diagram as an initial stab at a set of equivalence classes that might prove useful when testing this component.
In this post, I've given a small set of model types that I use frequently. Any of the examples I've given could be modelled another way, and on any given day I might have modelled them differently. In fact, I will often hop between attempts to model a system using different types as a way to provoke thought, to provide another perspective, to find a way in to the problem I'm looking at.
And having written that last sentence I now see that this blog post is the beginnings of a model of how I use models. But sometimes that's the way it works too - the model is an emergent property of the investigation and then feeds back into the investigation. It's all part of the craft.
Image: In Deep Music Archive