Blogs

Ask Me Another

Hiccupps - James Thomas - Sat, 11/22/2014 - 07:59
I just wrote a LinkedIn recommendation for one of my team who's leaving Cambridge in the new year. It included this phrase:

"unafraid of the difficult (to ask and often answer!) questions"

And he's not the only one. Questions are a tester's stock-in-trade, but what kinds of factors can make them difficult to ask? Here are some starters:
  • the questions are hard to frame because the subject matter is hard to understand
  • the questions have known answers, but none are attractive 
  • the questions don't have any known answers
  • the questions are unlikely to have any answers
  • the questions put the credibility of the questionee at risk
  • the questions put the credibility of the questioner at risk
  • the questions put the credibility of shared beliefs, plans or assumptions at risk
  • the questions challenge someone further up the company hierarchy
  • the questions are in a sensitive area - socially, personally, morally or otherwise
  • the questions are outside the questioner's perceived area of concern or responsibility
  • the questioner fears the answer
  • the questioner fears that the question would reveal some information they would prefer hidden
  • the questioner isn't sure who to ask the question of
  • the questioner can see that others who could ask the question are not asking it
  • the questioner has found that questions of this type are not answered
  • the questioner lacks credibility in the area of the question
  • the questioner lacks confidence in their ability to question this area
  • the questionee is expected not to want to answer the question
  • the questionee is expected not to know the answer
  • the questionee never answers questions
  • the questionee responds negatively to questions (and the questioner)
  • the questionee is likely to interpret the question as implied criticism or a lack of knowledge
Some of these - or their analogues - are also reasons for a question being difficult to answer, but here are a few more in that direction*:
  • the answer will not satisfy the questioner, or someone they care about
  • the answer is known but cannot be given
  • the answer is known to be incorrect or deliberately misleading
  • the answer is unknown
  • the answer is unknown but some answer is required
  • the answer is clearly insufficient
  • the answer would expose something that the questionee would prefer hidden
  • the answer to a related question could expose something the questionee would prefer hidden
  • the questioner is difficult to satisfy
  • the questionee doesn't understand the question
  • the questionee doesn't understand the relevance of the question
  • the questionee doesn't recognise that there is a question to answer
Much as I could often do without them - they're hard! - I welcome and credit difficult questions. 
Why?

Because they'll make me think, suggest that I might reconsider, force me to understand what my point of view on something actually is. Because they expose contradictions and vagueness, throw light onto dark corners, open up new possibilities by suggesting that there may be answers other than those already thought of, or those that have been arrived at by not thinking.

Because they can start a dialog in an important place, one which is the crux of a problem or a symptom or a ramification of it.

Because the difficult questions are often the improving questions: maybe the thing being asked about is changed for the better as a result of the question, or our view of the thing becomes more nuanced or increased in resolution, or broader, or our knowledge about our knowledge of the thing becomes clearer.

And even though the answers are often difficult, I do my best to give them in as full, honest and timely a fashion as I can because I think that an environment where those questions can be asked safely and will be answered respectfully is one that is conducive to good work.

* And we haven't taken into account the questions that aren't asked because they are hard to know or the answers that are hard purely because of the effort that's required to discover them or how differences in context can change how questions are asked or answered, how the same questions can be asked in different ways, willful blindness, plausible deniability, behavioural models such as the Satir Interaction Model and so on.

Thanks to Josh Raine for his comments on an earlier draft of this post.
Image: https://flic.kr/p/6Tnxm9
Categories: Blogs

How to make ANY code in ANY system unit-test-friendly

Rico Mariani's Performance Tidbits - Fri, 11/21/2014 - 00:37

There are lots of pieces of code that are embedded in places that make it very hard to test.  Sometimes these bits are essential to the correct operation of your program and could have complex state machines, timeout conditions, error modes, and who knows what else.  However, unfortunately, they are used in some subtle context such as a complex UI, an asynchronous callback, or other complex system.  This makes it very hard to test them because you might have to induce the appropriate failures in system objects to do so.  As a consequence these systems are often not very well tested, and if you bring up the lack of testing you are not likely to get a positive response.

It doesn’t have to be this way.

I offer below a simple recipe to allow any code, however complex, however awkwardly inserted into a larger system, to be tested for algorithmic correctness with unit tests. 

Step 1:

Take all the code that you want to test and pull it out from the system in which it is being used so that it is in separate source files.  You can build these into a .lib (C/C++) or a .dll (C#/VB/etc.); it doesn’t matter which.  Do this in the simplest way possible and just replace the occurrences of the code in the original context with simple function calls to essentially the same code.  This is just an “extract function” refactoring, which is always possible.
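
The post doesn’t include code, so here is a small hypothetical running example to make the steps concrete (all of the names below are mine, not the author’s): suppose some retry-and-save logic was buried in a UI error handler.  After Step 1 it lives in its own source file and the handler just calls it.  A minimal C++ sketch:

// report.h -- header for the newly extracted library code (hypothetical names)
#pragma once
#include <string>
bool SaveReportWithRetry(const std::string& path, const std::string& text, int maxAttempts);

// report.cpp -- the extracted code; for now it still calls the OS directly
#include "report.h"
#include <cstdio>
#include <chrono>
#include <thread>

bool SaveReportWithRetry(const std::string& path, const std::string& text, int maxAttempts)
{
    for (int attempt = 1; attempt <= maxAttempts; ++attempt)
    {
        std::FILE* f = std::fopen(path.c_str(), "w");   // direct file system dependency
        if (f)
        {
            bool ok = std::fputs(text.c_str(), f) >= 0;
            std::fclose(f);
            if (ok)
                return true;                            // saved successfully
        }
        std::this_thread::sleep_for(                    // direct timing dependency
            std::chrono::milliseconds(100 * attempt));
    }
    return false;                                       // gave up after maxAttempts
}

// At the original call site the inline code is replaced by a single call:
//     SaveReportWithRetry(reportPath, reportText, 5);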

Step 2:

In the new library code, remove all uses of ambient authority and replace them with a capability that does exactly the same thing.  More specifically, every place you see a call to the operating system replace it with a call to a method on an abstract class that takes the necessary parameters.  If the calls always happen in some fixed patterns you can simplify the interface so that instead of being fully general like the OS it just does the patterns you need with the arguments you need. Simplifying is actually better and will make the next steps easier.
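
Continuing the hypothetical example from Step 1, the direct file and sleep calls become methods on a small capability class sized to the patterns the code actually uses, and the extracted function now receives that capability (again, these names are illustrative, not from the post):

// report_env.h -- the capability: only the two operations this code needs
#pragma once
#include <string>

class IReportEnvironment
{
public:
    virtual ~IReportEnvironment() = default;
    // Write the whole report in one shot; returns false on any failure.
    virtual bool WriteFile(const std::string& path, const std::string& text) = 0;
    // Wait before the next retry attempt.
    virtual void SleepMs(int milliseconds) = 0;
};

// report.cpp -- the library code now talks only to the capability
// (report.h is updated to declare this signature instead of the Step 1 one)
#include "report_env.h"

bool SaveReportWithRetry(IReportEnvironment& env,
                         const std::string& path,
                         const std::string& text,
                         int maxAttempts)
{
    for (int attempt = 1; attempt <= maxAttempts; ++attempt)
    {
        if (env.WriteFile(path, text))
            return true;                 // saved successfully
        env.SleepMs(100 * attempt);      // back off before retrying
    }
    return false;                        // gave up after maxAttempts
}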

If you don’t want to add virtual function calls you can do the exact same thing with a generic or a template class using the capability as a template parameter.
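
For reference, the same hypothetical function with the capability supplied as a template parameter instead of an abstract class might look like this:

// Same logic, no virtual dispatch: the capability type is a template parameter.
#include <string>

template <typename Environment>
bool SaveReportWithRetry(Environment& env,
                         const std::string& path,
                         const std::string& text,
                         int maxAttempts)
{
    for (int attempt = 1; attempt <= maxAttempts; ++attempt)
    {
        if (env.WriteFile(path, text))
            return true;
        env.SleepMs(100 * attempt);
    }
    return false;
}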

If it makes sense to do so you can use more than one abstract class or template to group related things together.

Use the existing code to create one implementation of the abstract class that just does the same calls as before.
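
In the hypothetical example, that pass-through implementation just wraps the calls the code used to make directly (and in Step 3 this class is what moves back into the original code base):

// real_report_env.h -- pass-through implementation: the same OS calls as before
#pragma once
#include "report_env.h"
#include <cstdio>
#include <chrono>
#include <thread>

class RealReportEnvironment : public IReportEnvironment
{
public:
    bool WriteFile(const std::string& path, const std::string& text) override
    {
        std::FILE* f = std::fopen(path.c_str(), "w");
        if (!f)
            return false;
        bool ok = std::fputs(text.c_str(), f) >= 0;
        std::fclose(f);
        return ok;
    }

    void SleepMs(int milliseconds) override
    {
        std::this_thread::sleep_for(std::chrono::milliseconds(milliseconds));
    }
};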

This step is also a mechanical process and the code should be working just as well as it ever did when you’re done.  And since most systems use only a few OS features in any testable chunk, the abstract class should stay relatively small.

Step 3:

Take the implementation of the abstract class and pull it out of the new library and back into the original code base.  Now the new library has no dependencies left.  Everything it needs from the outside world is provided to it on a silver platter and it now knows nothing of its context.  Again everything should still work.

Step 4:

Create a unit test that drives the new library by providing a mock version of the abstract class.  You can now fake any OS condition, timeouts, synchronization, file system, network, anything.  Even a system that uses complicated semaphores and/or internal state can be driven to all the hard-to-reach error conditions with relative ease.  You should be able to reach every basic block of the code under test with unit tests.
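
To sketch what that could look like for the hypothetical example, here is a hand-rolled fake plus a test that drives both the retry path and the give-up path; no particular mocking or test framework is assumed:

// report_test.cpp -- drives the extracted library through its error paths with a fake
#include "report.h"       // assumed to declare the Step 2 SaveReportWithRetry signature
#include "report_env.h"
#include <cassert>
#include <vector>

class FakeReportEnvironment : public IReportEnvironment
{
public:
    int failuresBeforeSuccess = 0;   // how many writes should fail before one succeeds
    int writeCalls = 0;
    std::vector<int> sleeps;         // record of requested back-off delays

    bool WriteFile(const std::string&, const std::string&) override
    {
        ++writeCalls;
        return writeCalls > failuresBeforeSuccess;   // fail, fail, ..., then succeed
    }

    void SleepMs(int ms) override { sleeps.push_back(ms); }   // no real waiting in tests
};

int main()
{
    // Succeeds on the third attempt: two back-offs were requested, result is true.
    FakeReportEnvironment flaky;
    flaky.failuresBeforeSuccess = 2;
    assert(SaveReportWithRetry(flaky, "report.txt", "boom", 5));
    assert(flaky.writeCalls == 3);
    assert((flaky.sleeps == std::vector<int>{100, 200}));

    // Never succeeds: gives up after maxAttempts without touching any real file.
    FakeReportEnvironment broken;
    broken.failuresBeforeSuccess = 1000;
    assert(!SaveReportWithRetry(broken, "report.txt", "boom", 3));
    assert(broken.writeCalls == 3);

    return 0;
}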

In the future, you can repeat these steps using the same “authority free” library, merging in as many components as is reasonable so you don’t get a proliferation of testable libraries.

Step 5:

Use your code in the complex environment with confidence!  Enjoy all the extra free time you will have now that you’re more productive and don’t have bizarre bugs to chase in production.

 

Categories: Blogs

A Personal History of Microcomputing (Part 2)

Rico Mariani's Performance Tidbits - Thu, 11/20/2014 - 09:59

I could spend a long time writing about programming the PET and its various entry points, and I’m likely going to spend disproportionate time on the CBM family of computers because that’s what I know, but I think it’s important to look at other aspects of microcomputers as well and so my sojourn into 6502 assembly language will have to be cut short.  And anyway there’s room for programming examples elsewhere.

To make a decent microcomputer you need to solve certain supplemental problems… so this is the Peripherals edition of this mini-history.

Storage

Now here I’m really sad that I can’t talk about Apple II storage systems.  But I can give you a taste of what was possible/normal in 1979.  Tapes.  Tapes my son, lots of tapes.  Short tapes, long tapes, paper tapes, magnetic tapes, and don’t forget masking tape – more on that later.

Many computers (like the KIM) could be connected to a standard cassette player of some kind; the simplest situation just gave you some kind of connector that would provide input and output RCA jacks and you brought your own cassette player.

Paper tape was also used in some cases; in those, the paper tape insertion would effectively provide the equivalent of keystrokes on some TTY that was connected via say RS232 (and I say that loosely because usually it was just a couple of pins that behaved sorta like RS232 if you crossed your eyes enough).  Likewise paper tape creation could be nothing more than a recording of printed output which was scientifically created so as to also be valid input!  If that sounds familiar it’s because the same trick was used to provide full screen editing on PET computers – program listings were in the same format as the input and so you could just cursor up there and edit them some and press enter again.

OK, but let’s be more specific.  The PET’s tape drive could give you about 75 bytes/sec, it was really double that but programs were stored twice(!), for safety(!!), which meant that you could fit a program as big as all the available memory in a 32k PET in about 10 minutes of tape.  Naturally that meant that additional tape would just create fast forward nightmares so smaller tapes (and plenty of them) became somewhat popular.  I must have had a few dozen for my favorite programs.   Also backups were good because it got cold in Toronto and magnetic tape was not always as robust as you might like.   Plus you could rewind one with a pencil and it wouldn’t take so long, always a plus.

But the real magic of the PET’s tape was that the motor was computer controlled.  So if you got a big tape with lots of programs on it, it often came with an “index” program at the front.  That program would let you choose from a menu of options.  When you had selected it would instruct you to hit the fast forward button (which would do nothing) and strike a key on the pet.  Hitting the key would then engage the fast forward for just the right amount of time to get you to where the desired program was stored on the tape and the motor would stop!  Amazing!  What a time saver!

The timelines for other manufacturers are astonishingly similar; it seems everyone decided to get into the game in 1977 and things developed very much in parallel in all the ecosystems.  Apple and Radio Shack were on highly harmonious schedules.

But what about disk drives, surely they were a happening thing?  And indeed they were.  On the Commodore side there were smart peripherals like the 2040 and 4040 dual floppy drives.  Now they pretty much had to be that way because there was so little memory to work with that if you had to sacrifice even a few kilobytes to a DOS then you’d be hurting.   But what smarts!  Here’s what you do when you insert a new floppy:

open 1,8,15: print #1, "I0"

or you could get one free command in there by doing

open 1,8,15,"I0"

And then use print for new commands.  To load a program by name simply do this:

load "gimme",8

and then you can run it same as always. 

But how do you see what’s on your disk?  Well that’s easy, the drive can return the directory in the form of a program, which you can then list

load "$0",8
list

And there you have all your contents.  Of course this just wiped your memory so I hope you saved what you had…

Well, ok, it was a total breakthrough from tape but it was hardly easy to use, and the directory thing was not really very acceptable.  But fortunately it was possible to extend the basic interpreter… sort of.  By happenstance, or maybe because it was slightly faster, the PET used a tiny bit of self-modifying code to read the next byte of input and interpret it.  You could hack that code and make it do something other than just read the next byte.  And so were born language extensions like the DOS helper.   Now you had the power to do this:

>I0

To initialize drive zero, and,

>$0

To print the directory without actually loading it!  Amazing!

/gimme

Could be used instead of the usual load syntax.

From a specs perspective these 300 RPM babies apparently could do about 40 KB/s transfer internally but that slowed down when you considered the normal track-to-track seeking and the transfer over IEEE488 or else the funky serial IEEE488 of the 1541.   I think if you got 8KB/s on parallel you’d be pretty happy.  Each disk stored 170k!

Tapes soon gave way to floppies… and don’t forget to cover the notch with masking tape if you don’t want to accidentally destroy something important.  It was so easy to get the parameters backwards in the backup/duplicate command:

>D1=0

Meant duplicate drive 1 from drive 0, but it was best remembered as Destroy 1 using 0.

Suffice to say there has been a lot of innovation since that time.

Printing

It certainly wasn’t the case that you could get cheap high-quality output from a microcomputer in 1977 but you could get something.  In the CBM world the 2022 and 2023 were usable from even the oldest pet computers and gave you good solid dot matrix quality output.  By which I mean very loud and suitable for making output in triplicate. 

Letter quality printers were much more expensive and typically didn’t come with anything like an interface that was “native” to the PET.  I think other ecosystems had it better.  But it didn’t matter: the PET user port plus some software and an adapter cable could be made Centronics compatible, or with a different cable you could fake RS232 on it.  That was enough to open the door to many other printer types.  Some were better than others.  We had this one teletype I’ll never forget that had the temerity to mark its print speeds S/M/F for slow, medium, and fast – with fast being 300 baud.   Generously, it was more like very slow, slow, and medium – or if you ask me excruciatingly slow, very slow, and slow.  But this was pretty typical.

If you wanted high quality output you could get a daisywheel printer, or better yet, get an interface that let you connect a daisywheel typewriter.  That’ll save you some bucks… but ribbons are not cheap. 

They still get you on the ink.

With these kinds of devices you could reasonably produce “letter-quality” output.  But what a microcosm of what’s normal the journey was.  Consider the serial protocol: 7 or 8 bits? parity or no? odd or even?  Baud rate?  You could spend a half hour guessing before you saw anything at all.  But no worries, the same software could be used to talk to a TRS-80 Votrax synthesizer and speak like you’re in WarGames.

Now I call these things printers but you should understand they are not anything like what you see today.  The 2023 for instance could not even advance the page without moving the head all the way from side to side.  Dot matrix printers came out with new features like “bi-directional” meaning they could print going left to right and then right to left so they weren’t wasting time on the return trip.  Or “logic seeking” meaning that the printer head didn’t travel the whole length of the printed line but instead could advance from where it was to where it needed to be on the next line forwards or backwards.   A laser printer it ain’t.

Double-density dot matrix for “near-letter-quality” gave you a pretty polished look.  132 character wide beds were great for nice wide program listings but options were definitely more limited if you were not willing to roll your own interface box.

Still, with a good printer you could do your high school homework in a word processor, and print it in brown ink on beige paper with all your mistakes corrected on screen before you ever wrote a single character.

So much for my Brother Electric.  Thanks anyway mom.

 

Categories: Blogs

Continuous Delivery in a .NET World

Adam Goucher - Quality through Innovation - Wed, 11/19/2014 - 17:05

Here is the other talk I did at Øredev this year. The original pitch was going to be to show a single character commit and walk it through to production. Which is in itself a pretty bold idea for 40 minutes, but… But that pitch was made 7 months ago with the belief we would have Continuous Delivery to production in place. We ended up not hitting that goal though, so the talk became more of an experience report around things we (I) learned while doing it. I would guess they are still about a year away from achieving it, given what I know about priorities etc.

Below is the video, and then the deck, and the original ‘script’ I wrote for the talk, which in my usual manner I deviated from on stage at pretty much every turn. But stories were delivered, mistakes confessed to, and lots of hallway conversations generated, so I’m calling it a win.

CONTINUOUS DELIVERY IN A .NET WORLD from Øredev Conference on Vimeo.

Continuous Delivery in a .NET World from Adam Goucher

Introduction
I’ll admit to having been off the speaking circuit and such for a while, and the landscape could have changed significantly, but when last I was really paying attention, most, if not all, talks about Continuous Delivery focused on the ‘cool’ stack such as Rails, and Node, etc. Without any data to back up this claim at all, I would hazard a guess that there are however more .NET apps out there, especially behind the corporate firewall, than those other stacks. Possibly combined. This means that there is a whole lot of people being ignored by the literature. Or at least the ones not being promoted by a tool vendor… This gap needs to be addressed; companies live and die based on these internal applications and there is no reason why they should have crappy process around them just because they are internal.

I’ve been working in a .NET shop for the last 19 months and we’re agonizingly close to having Continuous Delivery into production… but still not quite there yet. Frustrating … but great fodder for a talk about actually doing this in an existing application [‘legacy’] context.

Not surprisingly, the high level bullets are pretty much the same as with other stacks, but there are of course variations on the themes at play in some cases.

Have a goal
Saying ‘we want to do Continuous Delivery’ is not an achievable business goal. You need to be able to articulate what success looks like. Previously, success has looked like ‘do an update when the CEO is giving an investor pitch’. What is yours?

Get ‘trunk’ deliverable
Could you drop ‘trunk’ [or whatever your version control setup calls it] into production at a moment’s notice? Likely not. While it seems easy, I think this is actually the hardest part of everything. Why? Simple … it takes discipline. And that is hard. Really hard. Especially when the pressure ramps up, as people fall back to their training in those situations, and if you aren’t training to be disciplined…

So what does disciplined mean to me, right now…

  • feature flags (existence and removal of)
  • externalized configuration
  • non assumption of installation location
  • stop branching!!

Figure out your database
This, I think, is actually the hardest part of a modern application. And is really kinda related to the previous point. You need to be able to deploy your application with, and without, database updates going out. That means…

  • your tooling needs to support that
  • your build chains needs to support that
  • your application needs to support that (forwards and backwards compatible)
  • your process needs to support that

This is not simple. Personally, I love the ‘migration’ approach. Unfortunately… our DBA didn’t.

Convention over Configuration FTW
I’m quite convinced of two things: this is why RoR and friends ‘won’ and why most talks deal with them rather than .NET. To really win at doing Continuous Delivery [or at least do it without going insane], you need to standardize your projects. The solution file goes here. Images go here. CSS goes here. Yes, the ‘default’ project layout does have some of that stuff already figured out, but it is waaaaay too easy to go off script in the name of ‘configurability’. Stop that! Every single one of our .NET builds at 360 is slightly different because of that, which means that we have to spend time wiring them up and dealing with their snowflake-ness. I should have been able to ‘just’ apply a [TeamCity] template to the job and give it some variables…

Make things small [and modular]
This is something that has started to affect us more and more. And something that comes by default in the RoR community with their prevalence of gems. If something has utility, and is going to be used across multiple projects, make it a NuGet package. The first candidate for this could be your logging infrastructure. Then your notifications infrastructure. I have seen so much duplicate code…

Not all flows are created equal
This is a recent realization, though having said that, is a pretty obvious one as well. Not all projects, not all teams, not all applications have the same process for achieving whatever your Continuous Delivery goal is. Build your chains accordingly.

Automate what should be automated
I get accused of splitting hairs for this one, but Continuous Delivery is not about ‘push a button, magic, production!’. It is all about automating what should be automated, and doing by hand what should be done by hand. But! Also being able to short circuit gates when necessary.

It is also about automating the right things with the right tools. Are they meant for .NET or was it an afterthought? Is it a flash in the pan or is it going to be around? Do its project assumptions align with yours?

Infrastructure matters
For Continuous Delivery to really work, and this is why it’s often mentioned in the same breath as DevOps (we’ll ignore that whole problem of ‘if you have devops you aren’t doing devops’…), the management of your infrastructure and environments needs to be fully automated as well. This is very much in the bucket of ‘what should be automated’. Thankfully, the tooling has caught up to Windows so you should be working on this right from the start. Likely in tandem with getting trunk deliverable.

Powershell
But even still, there are going to have to be things that you need to drop down to the shell and do. We made a leap forward towards our goal when we let Octopus start to control IIS. But they don’t expose enough hooks for the particular needs of our application so we have to use the IIS cmdlets to do what we need afterwards. And there is absolutely nothing wrong with this approach.

It’s all predicated on people
Lastly, and most importantly, you need to have the right people in place. If you don’t, then it doesn’t matter how well you execute on the above items, you /will/ fail.

Categories: Blogs

Auto-refresh a web page

Yet another bloody blog - Mark Crowther - Tue, 11/18/2014 - 14:36
Today I discovered the main website I keep at www.cyreath.co.uk wasn't live. Shockfest! I say the main site, but it's the main 'static' site; the main site is perhaps the blog over at http://cyreath.blogspot.co.uk now. Either way, the website was (is) down and I'm waiting for support to email me that it's back up.

The email will be appreciated, but just trying to load the site is the best way to know it's there. What I don't want to do is keep hitting F5 though. Thankfully JavaScript has the location.reload() method available. Wrapped in a little script, we can use this to poll the site and avoid having to refresh it ourselves.
The script is pretty straightforward and useful for other things. You can plug in that auction site, a page with a hit counter, a flight status page, or a test results dashboard showing on a large screen in the office.
Paste the below into your favourite text editor and save it as a .html page.

 <html>
<head>
<script type="text/javascript">
<!--
function pageCheckRefresh(timeoutPeriod) {
  // reload the page (and the iframes in it) after timeoutPeriod milliseconds
  setTimeout(function () { location.reload(true); }, timeoutPeriod);
}
// -->
</script>
</head>
<!-- change the 10000 below to the refresh time you want, in milliseconds -->
<body onload="pageCheckRefresh(10000);">
<!-- This first url is just a control, one we know WILL be there, so we know this checker is working -->
<p><iframe height="200" width="750" src="http://www.bbc.co.uk"></iframe></p>
<!-- You can have more pages in iframes, just copy the below ( <p> to </p> ) -->
<p><iframe height="300" width="750" src="http://www.cyreath.co.uk"></iframe></p>
</body>
</html>

You'll need to edit two things and add one:
  • Edit the refresh: it's currently set to 10000, which is 10 seconds.
  • Edit the target URL: change http://www.cyreath.co.uk to the URL you're interested in.
  • Add additional URLs: copy and paste the <p> ... </p> section and add another URL to check multiple pages.
That's it, straightforward but handy.
Mark.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Liked this post?
Say thanks by Following the blog or subscribing to the YouTube Channel!
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Categories: Blogs

A Personal History of Microcomputing (Part 1)

Rico Mariani's Performance Tidbits - Tue, 11/18/2014 - 07:29

I started writing this several years ago, never finished it...  stumbled across it just now and I thought maybe if I post this I'd be motivated to write more.

This is of course just my perspective and it's probably wrong in places, but it is my perspective.  So there you go.  Lots of fun memories.  Hope you enjoy. 

[You can also tweet me @ricomariani, cheers.]

A Personal History of Microcomputing

I can’t possibly cover this topic in anything like a detailed way and so you may ask what I’m doing trying to write down this little paltry bit of history.  Well for one thing I’m not going to be able to remember it forever and inasmuch as this is my history I’d like to record it before I forget.  But that’s not usually a good enough reason for me to do anything so I should add that another reason, perhaps the main reason, is that I’m so very, very, tired of hearing other made up histories that forget so much of what happened, even fairly simple things, and that attribute important bits of progress to all the wrong people.
 
So while I can’t write about everything I can write about some things, some things that I saw and even experienced myself.  And I hope that some of those things are interesting to others, and that they remember, too.
 
The first things I remember
 
I’m writing this on 11/1/2012 and I’m about to try to go back in time to the first relevant memory I have of this industry.   I’m fairly sure it was 1975, 5th grade for me, and I picked up a book from our school library that was called “Automatic Data Processing” or maybe it was “Automated Data Processing”.  I checked that book out of the library at least 3 times.  I never read much of it.  I’m not sure what it was doing in a grade school library.  I remember one cool thing about it was that it had a little decoder chart for the unusual symbols written at the bottom of personal checks.  I know that I tried to read it cover to cover but I didn’t have much success.  I guess I shouldn’t be too astonished, I was only 10 years old at the time.
 
The reason I bring this up is that in many ways this was what computer science was like at the time.  It wasn’t exactly brand new but it was perhaps the province of very large businesses and governments and there wasn’t very much that was personal about it, except for those magnetic ink markings on personal checks.
 
I did not know then that at about the same time in 1975 a small company called Microsoft was being founded.  I did not know that Intel had produced some very interesting silicon chips that would herald the first microcomputer.  I don’t think anyone I knew had a Pong game.  I had a Brother personal electric typewriter which was a pretty cool thing to have and that was the closest to word processing that I had ever experienced.  I didn’t use anything like white-out on account of I couldn’t afford it.
 
I was a lot more concerned about the fact that Canada was going to adopt the metric system than I was about any of these things.  The computer technology on Star Trek (which I saw in reruns) and The Six Million Dollar Man seemed equally reasonable to me.  I wasn’t old enough to think Erin Moran of Happy Days (Joannie) was really cute but I soon would be.  That’s 1975.
 
People Start Experiencing Computers
 
If you ever saw an original MITS Altair 8800 you would be really not impressed.  I mean so seriously not impressed that even McKayla Maroney could not adequately capture this level of unimpressedness (but I know someone who could :)).  If I had to pick, in 1975, between goofing around with an Altair and playing around with a hand-wound copper armature suspended on a couple of nails to make a motor, the motor would win every time.  And I think more important than that, only a few thousand people actually experienced the Altair.  The Altair did not take North America or the world by storm.  In fact you could live your life just fine and not be aware of their existence at all and that is certainly where I was.
 
However, there were lots of things starting to become normal and even common that were foreshadowing the personal computer. 
 
I think I first noticed it happening in watches.  You remember the kind, well there were two kinds really, the first kind was the type where you had to push a button and an LED display would then tell you what time it was.  This was important because of course you couldn’t have the LED display on all the time as it would run down the battery far too quickly.   Which meant glancing at your watch was right out -- you needed that button.   I think there was a commercial where a fellow was fencing and trying to see what time it was and it wasn’t going so good because he had to push the button.  I’m not sure why you would need to know what time it was while fencing but it did make the point dramatically that you had to push a button. 
 
The other type of watch was LCD, and I suppose the fact that you can still get LCD watches today and not so much LED (but they are making a comeback as flashlights) speaks volumes.  These devices had rudimentary features that allowed them to do their jobs.  They were not in any way generally programmable, at least not by end-users.  But groundwork was being laid.  You could do an entire volume on just wearable computers.
 
I only knew one person with an LED watch, but I knew a lot of people that had assorted LED games.  You might want to stop and think about that.  We’re talking about a game here where the primary display is a few 8 segment LED clusters, the same as you found on calculators and such.  These games were very ambitious indeed claiming to be things like football simulations.  An Xbox 360 these are not and Madden Football was many, many, years away.  But somehow the up-and-down dodge-the-barely-moving-bad-guys football experience, punctuated by the crisp sound of what had to be a piezo-electric crystal powered speaker, was pretty impressive.  As was the number of batteries you had to sacrifice to the thing.  Now to be sure I never took one apart and I wouldn’t have known what a piezo-electric speaker was at the time anyway but I’d bet you a nickel those games were powered by a simple microprocessor and some ROM.  They made more of a cultural dent than the Altair.   And they were more accessible than say Pong, which was present but hardly ubiquitous itself.
 
I’ve completely glossed over calculators at this point.  And maybe rightly so; even a four-function-doorstop of a calculator with no “memory” function was sufficiently expensive that you were unlikely to encounter them.

And much as I was down on the Altair, within a few years another Intel based computing system would become much more popular – Space Invaders.  Which for many of us was the first “non-pong-like game” we ever experienced or were inspired by.

In summary, I think it’s fair to say that at this point, the late seventies, you could still be excused if you had never touched anything like a microcomputer.  But that was likely to change soon.

My First Computers

I was taking a math enrichment program in junior high school and though our junior high didn’t have a computer, there was this HP minicomputer that made the rounds.  I’m not sure what it was exactly but I’ve looked at some pictures and specifications and I’m pretty convinced that it was an HP Model 9830A with the thermal printer and card-reader options.  We mostly fed it cards even though it had a console.  The thing was at our school for an entire two weeks, and we spent the week before it arrived learning flowcharting and BASIC.

I was totally hooked.  I stayed late at school every day the thing was there and then kept writing programs I couldn’t even run anywhere on paper pads the whole rest of the year.  So in the 9th grade I made a major right turn career wise and I signed up for computer science classes in high school which I otherwise likely would not have done.

As I started 10th grade in the fall of 1979, I landed in a classroom that had three, count’em, Commodore PET microcomputers and one Radio-Shack TRS80.  I’m not sure why the “Trash-80” was unpopular, it’s actually a formidable device in its own right but for reasons lost to history the PETs were what everyone really used.   I knew this was going to be a cool class when I walked in because I’d seen a PET on “The Price Is Right” so it had to be awesome.  I still remember my first day looking at one of those things, the teacher had it up front and was reviewing some basic commands and I was hypnotized by the flashing cursor. 

I worked on those computers at great length so I can tell you something about them and maybe something about the software eco-system they had.  The “PET 2001” was powered by a 6502 processor and had 8k of RAM (with famously 7167 bytes free for you at startup), and 8k of ROM for BASIC and I/O support.  Plus another 1k of RAM for video memory.  The IO system was not especially complicated, like most of the era it was just memory mapped IO and it was enough to read in a keyboard, and talk to the built-in cassette tape drive.  There was support for IEEE488 in the initial version but it didn’t work due to bugs, except for this one printer which included built in workarounds for the bugs.   IEEE488 “actually worked” in the 2001N series.

However, even on that original 8k PET you could do some pretty cool things.  The ROM included built in BASIC and so there were a variety of games and it was not that hard to make your own, and we did.  I was perennially working on a version of Space Invaders and then Asteroids.  It was always mostly working.  There were dozens of race track style games, some even first person.  There was a cool monthly digital magazine, “CURSOR” that had something new and slick every issue.  I remember hacking on the model rail simulator with special zeal.  There were decent chess programs, and even more, less decent chess programs available if you were willing to type them in yourself.

But what about practicality?  Even that computer, such as it was, could do word processing for you.  By the time I started using them, WordPro3 was already available.  I think they even had features that allowed you to print while you were still making edits!  Amazing!  You could insert new lines where you pleased and even copy text from one place to another without requiring you to travel forward in time to the Macintosh era.  In fact every microcomputer worth mentioning, with anything like a general purpose display, could do these basic functions.  They certainly were not peculiar to the PET.

If you wanted high quality sound, why, naturally you would attach a breadboard with about a dozen suitably sized resistors and an OP-AMP to your parallel port and then you could mix 4 sources and enjoy high quality 8-bit digital to analog sound playback.  Your experience is limited only by the quality of your resistors!  Naturally your playback program included 6 bit precision wave tables for sine waves that you could sample/mix to get your four voices because none were built in.  For bonus points add an FM modulator to your circuit and you could listen to it on your FM-radio at the frequency of your choice instead of attaching a speaker.   Of course stereo wasn’t possible on account of there weren’t enough output pins on the parallel port for 16 bits.

Of course if you wanted to just hear some variable pitch “beeping” and make music like that, that was easier.  You could just crank up the shift rate on the output port designed to be part of a UART (the CB2) and vary the shift rate according to the music.  The preferred way to hear that was to attach an alligator clip to the port with electric tape on the bottom teeth so as to not short it out (because the signal was on top and connectors were far too expensive) and then connect that to a suitable speaker.  This technique was popular in games because it didn’t tie up the processor shifting out waveforms.

My electronics teacher had an even simpler 6502 computer system that became the first thing I ever brought home.  The KIM-1 came with an impressive array of books on the 6502 architecture, which I was especially excited about because I wanted to learn to program the PET in Machine Language (the capitals were evident when we said it) and of course they had the same microprocessor.   But the really cool thing was Jim Butterfield’s “The First Book of KIM” which was simply outstanding in terms of having cool little programs that did something and taught you something. 

The KIM had a display that consisted of six 7-segment LEDs.  That’s it.  Enough to show the address and contents of a single byte of memory in hexadecimal.   On that display you could play a little pong type game, Hunt the Wumpus, navigate a star field, simulate a lunar landing, and more… if you were willing to enter the programs with the little calculator tablet.  With only 1k of memory you could be on a first-name basis with every byte of your program and indeed you pretty much had to be.  But then, that was the point.  And the KIM’s exposed guts encouraged even more hardware hacking than the PET did, so we soon had interesting keyboards attached and more. 

Still, I don’t recall ever doing anything especially practical on the device, it was a lot of elaborate playing around. It was an excellent training area and I suppose that’s what it was designed for more than anything else so I shouldn’t be surprised. 

The 6502 training was useful and soon I was squeezing programs into the spare cassette buffer of the PET like a champ.  Hybrid BASIC and assembly language programs were pretty common, whereas full assembly language programs often had nothing more than an enigmatic listing

10 SYS(1039)

The hybrids often had little SYS 826 and friends sprinkled in there.  So while a working knowledge of machine language helped you to understand a few more snippets of PETTREK, really the more interesting thing you could do with it is learn a lot more about how your computer worked.

Remember the PET had only 8k of ROM, which was actually a lot compared to its cousins, but still not so much that you couldn’t disassemble every last byte and then start pretending to be the CPU starting from the reset vector.   From there it wasn’t too long until you had figured out that JSR $FFD2 was used to write a character and even why that worked.  Those ROMs were full of great techniques…

 

 

Categories: Blogs

Load Testing with Visual Studio Online

Testing TV - Mon, 11/17/2014 - 23:34
Most development teams realize that they should do load testing but can’t because of time or resource constraints. Now, with Visual Studio Online, load testing has never been easier. We’ve introduced a simplified, browser-based authoring and configuration experience that lets you quickly create a load test and execute it at scale, using the power of […]
Categories: Blogs

A Tech Lead Paradox: Consistency vs Improvement

thekua.com@work - Mon, 11/17/2014 - 13:32

Agile Manifesto signatory Jim Highsmith talks about riding paradoxes in his approach to Adaptive Leadership.

A leader will find themselves choosing between two solutions or two situations that compete against each other. A leader successfully “rides the paradox” when they adopt an “AND” mindset, instead of an “OR” mindset. Instead of choosing one solution over another, they find a way to satisfy both situations, even though they contradict one another.

A common Tech Lead paradox is the case of Consistency versus Improvement.

The case for consistency

Code is easier to understand, maintain and modify when it is consistent. It is so important that there is a wiki page on the topic, and the 1999 classic programming book The Pragmatic Programmer: From Journeyman to Master had a chapter titled “The Evils of Duplication.” Martin Fowler wrote about similar code smells, calling them “Divergent Change” and “Shotgun Surgery” in his Refactoring book.

Consistency ultimately helps other developers (or even your future self) change code with less mental burden figuring out if there will be unwanted side-effects.

The case for improvement

Many developers want to use the latest and greatest tool, framework or programming language. Some examples: Java instead of C/C++, Python/Ruby instead of Java, JavaScript (Node) instead of Python/Ruby and then Clojure in place of JavaScript. The newest and latest technologies promise increased productivity, fewer bugs and more effective software development. Something that we all want. They promise the ability to accomplish something with fewer lines of code, or a simpler, clearer way to write something.

The conflict

Software is meant to be soft. Software is meant to be changed. A successful codebase will evolve over time, but the more features and changes a codebase has, the harder it becomes to add something new without making the codebase inconsistent. When a new technology is added to the mix, there are suddenly two ways of accomplishing the same thing. Multiply this over time and the number of transitions, and a codebase suddenly has eight different ways of accomplishing the same thing.

Transitioning everything to a new technology takes time. Making a change to an old part of the system is a gamble. Leaving the codebase as it is makes potential new changes in this area hard. That new change may never happen. Migrating everything over has the risk of introducing unwanted side-effects and taking time that may never be worth it.

To the developer wanting the new technology, the change appears easy. To those who have to follow up with the change (i.e. other team members or future team members) it may not be so clear. Making it consistent takes time away from developing functionality. Business stakeholders (understandably) want justification.

Phil Calçado (@pcalcado) tweeted about this paradox:
As a dev, I love going for the shiny language. As a manager, I want a mature ecosystem and heaps of bibliography on how to write decent apps

What does a Tech Lead do?

Tech Leads ride the paradox by encouraging improvement and continually seeking consistency. But how? Below I provide you with a number of possible solutions.

Use Spike Solutions

Spikes are a time-boxed XP activity to provide an answer to a simple question. Tech Leads can encourage spike solutions to explore whether or not a new technology provides the foreseeable benefit.

Improvement spikes are usually written stand-alone – either in a branch or on a separate codebase. They are written with the goal of learning something as fast as possible, without worrying about writing maintainable code. When the spike is over, the spike solution should be thrown away.

Spikes provide many benefits over discussion because a prototype better demonstrates the benefits and problems given a particular codebase and problem domain. The spike solution provides a cheap way to experiment before committing to a particular direction.

Build a shared roadmap

Improvements are easy to make to a small, young codebase. Everything is easily refactored to adopt a new tool/technology. It’s the larger, longer-lived codebases that are more difficult to change because more has been built up on the foundations that must be changed.

A Tech Lead establishes a shared understanding with the team of what “good” looks like. Specifically, which tool/technology should be used for new changes. They keep track of older instances, looking to transition them across where possible (and where it makes sense).

Techniques like the Mikado Method are indispensable for tackling problems by eating away at the bigger problem.

Playback the history

A new developer sees five different ways of doing the same thing. What do they do? A Tech Lead pre-empts this problem by recounting the story of how change was introduced, what was tried when, and what the current preferred way of doing things is.

Ideally the Tech Lead avoids having five different ways of accomplishing the same thing, but when not possible, they provide a clear way ahead.

If you liked this article, you will be interested in “Talking with Tech Leads,” a book that shares real life experiences from over 35 Tech Leads around the world. Now available on Leanpub.

Categories: Blogs

How to remain relevant – price change

The Social Tester - Mon, 11/17/2014 - 13:00

How to remain relevant Just a short post to let you know that my book, Remaining Relevant – testers edition, will be going up in price on 30th November 2014. The price is changing to make way for a new book launch next year and also a non-testing edition of Remaining Relevant also coming out … Read More →

The post How to remain relevant – price change appeared first on The Social Tester.

Categories: Blogs

How to get the most out of impact mapping

Gojko Adzic - Mon, 11/17/2014 - 12:35

Ingrid Domingues, Johan Berndtsson and I met up in July this year to compare the various approaches to Impact Mapping and community feedback, and to investigate how to get the most out of this method in different contexts. The conclusion was that there are two key factors to consider for software delivery using impact maps, and recognising the right context is crucial to get the most out of the method. The two important dimensions are the consequences of being wrong (making the wrong product management decisions) and the ability to make investments.

These two factors create four different contexts, and choosing the right approach is crucial in order to get the most out of the method:

  • Good ability to make investments, and small consequences of being wrong – Iterate: Organisations will benefit from taking some initial time defining the desired impact, and then exploring different solutions with small and directed impact maps that help design and evaluate deliverables against desired outcome.
  • Poor ability to decide on investments, small consequences of being wrong – Align: Organisations will benefit from detailing the user needs analysis in order to make more directed decisions, and to drive prioritisation for longer pieces of work. Usually only parts of maps end up being delivered.
  • Good ability to make investments, serious consequences of being wrong – Experiment: Organisations can explore different product options and user needs in multiple impact maps.
  • Poor ability to make investments, serious consequences of being wrong – Discover: The initial hypothesis impact map is detailed by user studies and user testing that converge towards the desired impact.

We wrote an article about this. You can read it on InfoQ.

Categories: Blogs

Agile Education Engine - Bringing Power to your Agile Transformation

When we look across the numerous Agile deployment efforts, we see a lower rate of success than we expect.  May I suggest that in order to improve the odds of achieving Agile success and gaining the business results it can bring, there are three success factors.  The first is that the Agile change must be thought of as an organizational-level change.  The second is that the Agile change must focus on getting the mind Agile-ready.  This is emphasized in the article Are you Ready for your Agile Journey.  And the third is that in order to bridge the gap between the Agile values and principles and the Agile methods and practices, people need to be well educated in ways to build more customer value, optimize the flow of work, and increase quality with feedback loops.
The current level of Agile education tends to be limited to 2 days of training and a variety of books.  I think most people will acknowledge that 2 days of Agile training does not provide enough learning.  On the other hand, reading lots of books takes a lot of time, and the books are often not aligned with each other.  The other challenge with some of the Agile education is that it is often focused on implementing the mechanics.
What is missing from many Agile transformations is an Agile Education Engine that helps you truly understand and embrace Agile and helps bridge the gap between the Agile values and principles and the Agile methods and practices.  This will help folks better understand how to embrace and implement Agile and move beyond simply following the mechanics of the methods and practices.  The Agile Education Engine can help ready the mind for an effective transformation.


One of the best Agile education engines that I have seen is the material found in the Value Flow Quality (VFQ) work-based education system.  VFQ provides a set of well-researched topics that are easily digested in the form of readings, case studies, activities, and experiments.  It provides students with the ability to study a little, then experiment a little within their own context (aka, project or team).  The benefit of the VFQ work-based learning system is that it helps people apply their newly learned skills on the job when they need them.  This bodes very well for the learners because they can learn at their own pace as they are trying to implement the Agile values, principles, and practices. 
Some topics that VFQ offers are: Why Change, Optimizing Flow, Feedback, Requirements, Prioritization, Trade-offs, Understanding your Customer, Delivering Early and Often, Teams, Motivation, Attacking your Queues, Work in Progress, and more.  Each topic includes a number of techniques that will help you achieve the business outcomes you are looking for.  The VFQ materials will provide you with knowledge on Value Stream Mapping, Story Mapping, Cost of Delay, 6 Prisms and much more.
In addition, VFQ really helps you get to the state of “being Agile”.  It moves you away from thinking about the mechanics.  Instead, it provides you a layer of knowledge to ensure you apply the principles and behaviors to the practices to gain the outcomes that you want.  Finally, applying the VFQ education is also an excellent way to kick-start an Agile transformation.  This way, the Agile champions for the transformation and teams are armed with a variety of different ways to bring Agile to the organization. 
If you find yourself struggling with getting a good baseline of Agile education, then consider the Value Flow Quality (VFQ) work-based learning system.  It will help you bridge the gap between the Agile values and principles and the mechanics of many of the Agile methods so that you bring the Agile mindset to bear as you start or continue your Agile journey. 
Categories: Blogs

Inspiration from Leonardo da Vinci

James Grenning’s Blog - Sun, 11/16/2014 - 09:12

While in Singapore, we visited the Leonardo da Vinci exhibit at the Marina Bay ArtScience Museum. We took the guided tour with an expert from Milan. We think he was a Catholic priest, judging by the Roman collar. We were both amazed at the great influence and knowledge da Vinci had in math, science, art, music, technology, weapons, architecture…

He started with a disadvantage; he spoke no Latin, the language of the educated. His accomplishments would have been amazing if they had been the work of 10 people rather than just one. You'd better go read more about him here.

Leonardo da Vinci believed in experience and observation over dogma. In this we may be able to call him one of the great-great*-grandparents of Agile, Extreme Programming, and Deming’s PDCA.

Leonardo da Vinci was a religious man, and he believed that God speaks to man through math. He saw the Golden Ratio as the mark of God. Many of you know the golden ratio through the Fibonacci sequence used in Planning Poker. I don’t think that God’s hand is part of software estimation in any form. That’s all on us. It is fascinating how it appears throughout nature and that the polymath Leonardo saw it so well. For you math types, here is an interesting description of the relationship between Fibonacci numbers and the golden ratio.

Another cool fact about da Vinci: his written works are all in mirror writing. You have to look in a mirror to read them, or turn your brain inside out. There are two main reasons, the scholars surmise. The popular reason is that he was trying to hide his work. Our guide’s more informed reason is that he was left handed. In those days, writing was done with a quill and ink. A left-handed person would make a pretty big mess of that. There is no mess, just precision. These drawings are quite small with very fine detail. Mirror writing is a rare skill, it seems.

He actually did not reveal all his work, but not necessarily out of secrecy; he just kept all his notes and did not try to publish them. It sounded like he was somewhat of an introvert.

Leonardo taught drawing. He had his students use a silver point, and duplicate his drawings. The silver point could not be erased. I guess that would have led to the artists being very careful. Copying Leonardo’s work meant they had to learn to draw as he did. Not until they mastered drawing with the silver point were they allowed to use paint. He focused on craftsmanship and growing knowledge and skill.

Here are a few more photos

This is a picture of some of Leonardo’s geometry, witnessing the symmetry and attempting to find a way to square a circle.

That is a picture of the enlarged image that covered a wall. We also could see the original in a dark and frigid room. The room conditions were to protect the 500 year old documents. The original was about 6″ wide. His writing and drawing was precise and tiny in all these works.

da Vinci had drawings for the construction of a water lift. Here is the model built from it. His drawings had detailed measurements on dimensions and construction.

It kind of looks like a modern day grain elevator.

We also saw the original of this drawing. Right click this one and open it on another page to see the detail.

Again, the precision and measurements allow physical models to be built.

The guide told us that if Leonardo had titanium, he could have built these so they could fly. Leonardo knew that men did not have the strength to actually fly with these wings. It is important to know limits, but he did not seem to be constrained by any conventional wisdom that went against what he could witness.

da Vinci used his knowledge and skill from one field and applied them to others. We can all learn and be inspired from him.

Seeing what da Vinci accomplished and hearing his approach reinforces that we need more of this in software development. Learning from observation, mastering your craft, preferring observation over dogma, experimenting, and learning from failure should be valued more and sought after. Let’s take the lesson to Scrum users. The cycle of Scrum is essentially an observe and improve cycle. Are you cycling, but not bothering to observe and improve, concerning yourself only with the dogma of Scrum? What would Leonardo’s approach to software be?

You can find some more of my photos here.

Thoughts on the Consulting Profession

Sometimes I come across something that makes me realize I am the "anti" version of what I am seeing or hearing.

Recently, I saw a Facebook ad for a person's consulting course that promised high income quickly with no effort on the part of the "consultant" to actually do the work. "Everything is outsourced," he goes on to say. In his videos he shows all of his expensive collections, which include both a Ferrari and a Porsche. I'm thinking "Really?"

I'm not faulting his success or his income, but I do have a problem with the promotion of the concept that one can truly call themselves a consultant or an expert in something without actually doing the work involved. His high income is based on the markup of other people's subcontracting rates because they are the ones with the actual talent. Apparently, they just don't think they are worth what they are being billed for in the marketplace.

It does sound enticing and all, but I have learned over the years that my clients want to work with me, not someone I just contract with. I would like to have the "Four Hour Workweek", but that's just not the world I live in.

Nothing wrong with subcontracting, either. I sometimes team with other highly qualified and experienced consultants to help me on engagements where the scope is large. But I'm still heavily involved on the project.

I think of people like Gerry Weinberg or Alan Weiss who are master consultants and get their hands dirty in helping solve their clients' problems. I mentioned in our webinar yesterday that I was fortunate to have read Weinberg's "Secrets of Consulting" way back in 1990 when I was first starting out on my own in software testing consulting. That book is rich in practical wisdom, as are Weiss' books. (Weiss also promotes the high income potential of consulting, but it is based on the value he personally brings to his clients.)

Without tooting my own horn too loudly, I just want to state for the record that I am a hands-on software quality and testing practitioner in my consulting and training practice. That establishes credibility with my clients and students. I do not get consulting work only to then farm it out to sub-contractors. I don't consider that true consulting.

True consulting is strategic and high-value. My goal is to do the work, then equip my clients to carry on - not to be around forever, as is the practice of some consulting firms. However, I'm always available to support my clients personally when they need ongoing help.

Yes, I still write test plans, work with test tools, lead teams and do other detailed work so I can stay sharp technically. However, that is only one dimension of the consulting game - being able to consult and advise others because you have done it before yourself (and it wasn't all done 20 years ago).

Scott Adams, the creator of the Dilbert comic strip, had a heyday poking fun at consultants. His humor had a lot of truth in it, as did the movie "Office Space."

My point?

When choosing a consultant, look for 1) experience and knowledge in your specific area of problems (or opportunities), 2) the work ethic to actually spend time on your specific concerns, and 3) integrity and trust. All three need to be in place or you will be under-served.

Rant over and thanks for reading! I would love to hear your comments.

Randy


Temporarily ignore SSL certificate problem in Git under Windows

Decaying Code - Maxime Rouiller - Sat, 11/15/2014 - 04:43

So I've encountered the following issue:

fatal: unable to access 'https://myurl/myproject.git/': SSL certificate problem: unable to get local issuer certificate

Basically, we're working on a local Git Stash project and the certificates changed. While they were working to fix the issues, we had to keep working.

So I know that the server is not compromised (I talked to IT). How do I say "ignore it please"?

Temporary solution

This is because you know they are going to fix it.

PowerShell code:

$env:GIT_SSL_NO_VERIFY = "true"

CMD code:

SET GIT_SSL_NO_VERIFY=true

This will get you up and running as long as you don’t close the command window. This variable will be reset to nothing as soon as you close it.

Permanent solution

Fix your certificates. Oh… you mean it’s self-signed and you will forever use that one? Install it on all machines.

Seriously. I won’t show you how to permanently ignore certificates. Fix your certificate situation, because trusting ALL certificates without caring whether they are valid or not is just plain dangerous.

Fix it.

NOW.

The Yoda Condition

Decaying Code - Maxime Rouiller - Sat, 11/15/2014 - 04:43

So this will be a short post. I would like to introduce a word into my vocabulary, and yours too, if it isn't there already.

First, I would like to credit Nathan Smith for teaching me that word this morning. Here is the tweet:

Chuckling at "disallowYodaConditions" in JSCS… https://t.co/unhgFdMCrh — Awesome way of describing it. pic.twitter.com/KDPxpdB3UE

— Nathan Smith (@nathansmith) November 12, 2014

So... this made me chuckle.

What is the Yoda Condition?

The Yoda Condition can be summarized as "inverting the operands of a comparison in a conditional".

Let's say I have this code:

string sky = "blue";if(sky == "blue) {    // do something}

It can be read easily as "If the sky is blue". Now let's put some Yoda into it!

Our code becomes:

string sky = "blue";
if ("blue" == sky)
{
    // do something
}

Now our code reads as "If blue is the sky". And that's why we call it a Yoda condition.

Why would I do that?

First, if you accidentally type "=" instead of "==", the code will fail at compile time, since you can't assign to a string literal. It can also avoid certain null reference errors.
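To make that concrete, here is a small sketch in C# (the sky variable mirrors the example above; the commented-out lines show what would fail):

string sky = null;

// Typo protection: with the Yoda form, dropping one "=" cannot compile,
// because a literal cannot be the target of an assignment.
// if ("blue" = sky) { }           // compile error: the left-hand side of an assignment must be a variable

// Null safety: calling Equals on the literal is safe even when sky is null.
bool isBlue = "blue".Equals(sky);  // false, no exception
// bool oops = sky.Equals("blue"); // NullReferenceException at runtime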

What's the cost of doing this then?

Besides getting on the nerves of all the programmers on your team? You reduce the readability of your code by a huge factor.

Each developer on your team will hit a snag on every if since they will have to learn how to speak "Yoda" with your code.

So what should I do?

Avoid it. At all costs. Readability is the most important thing in your code. To be honest, you're not going to be the only guy/girl maintaining that app for years to come. Make it easy for the maintainer and remove that Yoda talk.

The problem this kind of code solves isn't worth the readability you are losing.

Do you have your own Batman Utility Belt?

Decaying Code - Maxime Rouiller - Sat, 11/15/2014 - 04:43
Just like most of us on any project, you (yes you!) as a developer must have done the same thing over and over again. I'm not talking about coding a controller or accessing the database.

Let's check out some concrete examples shall we?

  • Have you ever set up HTTP caching properly, created a class for your project and called it done?
  • What about creating a proper Web.config to configure static asset caching?
  • And what about creating a MediaTypeFormatter for handling CSV or some other custom type?
  • What about that BaseController that you rebuild from project to project?
  • And those extension methods that you use ALL the time but rebuild for each project (see the sketch just after this list)...
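As a concrete (and entirely made-up) example of that last bullet, this is the kind of tiny extension class that tends to get rebuilt on every project:

using System.Collections.Generic;
using System.Linq;

// Sketch of typical "utility belt" extension methods; the names are illustrative,
// not taken from any particular library.
public static class UtilityExtensions
{
    // True when the sequence is null or has no elements.
    public static bool IsNullOrEmpty<T>(this IEnumerable<T> source)
    {
        return source == null || !source.Any();
    }

    // Shorthand for string.IsNullOrWhiteSpace that reads better in conditions.
    public static bool IsBlank(this string value)
    {
        return string.IsNullOrWhiteSpace(value);
    }
}

Nothing fancy, and that's the point: it's the same dozen lines you keep retyping.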

If you answered yes to any of those questions... you are at great risk of having to code those again.

Hell... maybe someone already built them out there. But more often than not, they will be packed with other classes that you are not using. However, most of those projects are open source and will allow you to build your own Batman utility belt!

So once you see that you do something often, start building your utility belt! Grab those open source classes left and right (make sure to follow the licenses!) and start building your own class library.

NuGet

Once you have a good collection that is properly separated into its own project and you feel ready to kick some monkey ass, the only way to go is to use NuGet to pack it together!

Check out the reference to make sure that you do things properly.

NuGet - Publishing

OK, you've got a steamy hot new NuGet package that you are ready to use? You can push it to the main repository if your intention is to share it with the world.

If you are not quite ready yet, there are multiple ways to use a NuGet package internally in your company. The easiest? Just create a share on a server and add it to your package sources! As simple as that!

Now just make sure to increment your version number on each release by using the SemVer convention.

Reap the profit

OK, no... not really. You probably won't be making money anytime soon with this library. At least not in real money. Where you will gain, however, is when you are asked to do one of those boring tasks yet again in another project or at another client.

The only thing you'll do is import your magic package, use it and boom. This task that they planned would take a whole day? Got finished in minutes.

As you build up your toolkit, more and more tasks will become easier to accomplish.

The only thing left to consider is what NOT to put in your toolkit.

Last minute warning

If you have an employer, make sure that your contract allows you to reuse code. Some contracts allow you to do that, but double-check with your employer.

If you are a company, make sure not to bill your client for the time spent building your tools, or he might have the right to claim them as his own since you billed him for it.

In case of doubt, double check with a lawyer!

Software Developer Computer Minimum Requirements October 2014

Decaying Code - Maxime Rouiller - Sat, 11/15/2014 - 04:43

I know that Scott Hanselman and Jeff Atwood have already done something similar.

Today, I'm bringing you the minimum specs that are required to do software development on a Windows Machine.

P.S.: If you are building your own desktop, I recommend PCPartPicker.

Processor

Recommendation

Intel: Intel Core i7-4790K

AMD: AMD FX-9590

Unless you use a lot of software that supports multi-threading, a simple 4-core CPU here will work out for most needs.

Memory

Recommendation

Minimum 8GB. 16GB is better.

My minimum requirement here is 8GB. I run a database engine and Visual Studio. SQL Server can easily take 2GB with some big queries. If you have extensions installed for Visual Studio, it will quickly rise to 1GB of usage per instance, and finally... Chrome. With multiple extensions and multiple pages running... you will quickly reach 4GB.

So get 8GB as the bare minimum. If you are running Virtual Machines, get 16GB. It won't be too much. There's no such thing as too much RAM when doing software development.

Hard drive

Recommendation

512 GB SSD drive

I can't recommend an SSD enough. Most tools that you use on a development machine require a lot of I/O, especially random reads. When a compiler starts and retrieves all your source code to compile, it needs to read all those files. The same goes for tooling like ReSharper or CodeRush. I/O speed is crucial. This requirement is even more important on a laptop. Traditionally, PC makers put a 5400 RPM HDD in a laptop to reduce power usage. However, a 5400 RPM drive will be felt everywhere while doing development.

Get an SSD.

If you need bigger storage (terabytes), you can always get a second hard drive of the HDD type instead. It is slower, but capacities are also higher. On most laptops, you will need external storage for this drive, so make sure it is USB 3.0 compatible.

Graphic Card

Unless you do graphic rendering or are working with graphic tools that require a beast of a card... this is where you will put the least amount of money.

Make sure you have enough outputs for your number of monitors and that the card can drive the right resolution and refresh rate.

Monitors

My minimum requirement nowadays is 22 inches. 4K is nice but is not part of the "minimum" requirement. I enjoy a 1920x1080 resolution. If you are buying them for someone else, make sure they can be rotated. Some developers like to have a vertical screen when reading code.

To Laptop or not to Laptop

Some companies go laptop for everyone. Personally, if the development machine never needs to be taken out of the building, you can go desktop. You will save a bit on all the required accessories (docking port, wireless mouse, extra charger, etc.).

My personal scenario takes me to clients all over the city as well as doing presentations left and right. Laptop it is for me.

SVG are now supported everywhere, or almost

Decaying Code - Maxime Rouiller - Sat, 11/15/2014 - 04:43

I remember that when I wanted to draw some graphs on a web page, I normally had two solutions:

Solution 1 was to have an IMG tag that linked to a server component that would render an image based on some data. Solution 2 was to do Adobe Flash or maybe even some Silverlight.

Problem with Solution 1

The main problem is that it is not interactive. You have an image and there is no way to do drilldown or do anything with it. So unless your content was simple and didn't need any kind of interaction or simply was headed for printing... this solution just wouldn't do.

Problem with Solution 2

While you now got all the interactivity and the beauty of a nice Flash animation and plugin... you lost the benefits of the first solution too. You can't print it if you need to, and on top of that... it required a plugin.

For OS X back in 2009, plugins were the leading cause of browser crashes, and there is no reason to believe the situation was much different in other browsers.

The second problem is security. A plugin is just another attack vector on your browser, and requiring a plugin just to display nice graphs seems a bit extreme.

The Solution

The solution is relatively simple. We need a system that allows us to draw lines, curves and whatnot based on coordinates that we provide it.

That system should of course support colors, fonts and all the basic HTML features that we know now (including events).

Then came SVG

SVG has been the main specification for drawing anything vector-related in a browser since 1999. Even though the specification started at about the same time as IE5, it wasn't supported in Internet Explorer until IE9 (12 years later).

Support for SVG is now in all major browsers, from Internet Explorer to Firefox, and even on your phone.

Chances are that every computer you are using today can render SVG inside your browser.

So what?

As a general rule, SVG is underused, or thought of as something only artists do, or as too complicated to bother with.

My recommendation is to start cracking today on libraries that leverage SVG. By leveraging them, you are setting yourself apart from others and can start offering real business value to your clients right now that others won't be able to.

SVG has been available on all browsers for a while now. It's time we start using it.

Browsers that do not support SVG
  • Internet Explorer 8 and lower
  • Old Android devices (2.3 and lower); partial support in 3.0-4.3
References, libraries and others

Microsoft, Open Source and The Big Ship

Decaying Code - Maxime Rouiller - Sat, 11/15/2014 - 04:43


I would like to note that this post is based only on publicly available information and not on my status as a Microsoft MVP. I did not interview anyone at Microsoft for these answers, and I did not receive any privileged information while writing it.

When it happened

I'm not sure exactly when this change toward open source happened. Microsoft is a big ship. Once you start steering, it takes a while before you can feel the boat turn. I think it happened around 2008 when they started including jQuery in the default templates. It was the first swing of the wheel. Back then, you could have confused it for just another side project. Today, I think it was a sign of change.

Before this subtle change, we had things like Microsoft Ajax, the Ajax Control Toolkit and so many other reinventions from Microsoft. The same comment came back every time:

Why aren't you using <INSERT FRAMEWORK HERE> instead of reinventing the wheel?

Open source in the Microsoft world

Over 10 years ago, Microsoft wasn't doing open source. In fact, nothing I remember was open sourced. Free? Yes. Open source? No. The mindset of those days has changed.

The Changes

Initiatives like NuGet, integrating jQuery into the Visual Studio templates, the multiple GitHub accounts, and even going as far as to replace the default JSON serializer with JSON.NET instead of writing its own are all proof that Microsoft has changed and is continuing to change.

It's important to take into account that this is not just lip service here. We're talking real time and money investment to publish tools, languages and frameworks into the open. Projects like Katana and Entity Framework are even open to contribution by anyone.

Not to mention that Roslyn (the new C#/VB.NET compiler) as well as the F# compiler are now open source.

This is huge and people should know.

Where is it going today

I'm not sure where it's going today. Like I said, it's a big ship. From what I see, Microsoft is going 120% on Azure. Of course, Windows and Office are still there, but... we can already see that it's not an open source vs. Windows war anymore. The focus has changed.

Open source is being used to enrich Microsoft's environment now. Tools like SideWaffle are being created by Microsoft employees like Sayed Hashimi and Mads Kristensen.

When I see a guy like Satya Nadella (CEO) talk about open source, I find it inspiring. Microsoft is going open source internally and encouraging all employees to participate in open source projects.

Microsoft has gone through a culture change, and it's still happening today.

Comparing Microsoft circa 2001 to Microsoft 2014.

If you have been in the field for at least 10 years, you will remember that way back then, Microsoft didn't do open source. At all.

Compare it to what you've read about Microsoft now. It's been years of change since then and it's only the beginning. Back then, I wouldn't have believed anyone telling me that Microsoft would invest in Open Source.

Today? I'm grinning so much that my teeth are dry.

List of d3.js library for charting, graphs and maps

Decaying Code - Maxime Rouiller - Sat, 11/15/2014 - 04:43

So I’ve been trying different kinds of libraries that are based on d3.js. Most of them are awesome and… I know I’m going to forget some of them. So I decided to build a list and try to arrange them by category.

Charts
  • DimpleJS – Easy API, lots of different type of graphs, easy to use
  • C3.js – Closer to the data than dimple but also a bit more powerful
  • NVD3.js – Similar to Dimple, but requires a CSS file for proper usage
  • Epoch – Seems to be more focused on real-time graphs
  • Dygraphs – Focus on huge dataset
  • Rickshaw – Lots of easy charts to choose from. Used by Shutterstock
Graphs

Since I haven’t had the chance to try them out, I won’t be able to provide more detailed comments about them. If you want me to update my post, hit me up on Twitter @MaximRouiller.

Data Visualization Editor
  • Raw – Focus on bringing data from spreadsheets online by simply copy/pasting it.
  • Tributary – Not simply focused on graphics, allows you to edit numbers, colors and such with a user friendly interface.
Geographical maps
  • DataMaps – Not a library per se but a set of examples that you can copy/paste and edit to match what you want.