Tuesday, March 27, 2012

Redesigning the keyboard for learnability

 

A designer knows he has achieved perfection not when there is nothing left to add, but when there is nothing left to take away.

Antoine de Saint Exupéry

Even in the face of the tablet revolution, the keyboard is not going away anytime soon. You can do without it when consuming (browsing, buying etc.). But when producing (the stuff you do at work), you need the keyboard. Well, at least I do.

 

The keyboard is a dumping ground

My Thinkpad has these modifier keys: Shift, Caps lock, Ctrl, Alt, Alt Gr, Windows, Fn. They are a sign of a disease: Layering more and more functions on top of the same QWERTY keys. When 7 modifier keys are not enough, what’s the solution? To combine modifier keys! Having to memorize unnatural combinations like Ctrl+Alt+[letter] is not uncommon in large applications.

When each key has many different functions – each assigned differently in various applications – it becomes a burden for the user to remember. Once you learn the key combinations, they are quick to invoke. But it takes time to learn them, and that time is wasted because keyboard commands are hard to discover. From looking at the keys, there’s no obvious way of telling which command will be invoked. Giant posters or physical keyboard overlays go some of the way, but they are still only lipstick on a pig.

 

There are several other problems with the keyboard as we know it

First off, many keys are there for historical reasons. What sense does Print Scr make in 2011? Why do we need both Pause/Break and Esc? Many of those keys were invented before PCs became media players and before the internet.

Secondly, the QWERTY layout was designed around the limits of mechanical typewriters: frequently paired letters were spread apart to keep the type bars from jamming, not to let us type as fast as possible. Isn’t it time to unleash the potential of a layout that allows us to type as fast as we can?

What if we could redesign the keyboard to accommodate new PC users who may be overwhelmed by the complexity? What if we could ease the burden on the daily user’s memory? Let’s give it a go and start “taking away”.

To recap, here is what we want to do:

  1. Drastically simplify the overall keyboard layout by minimizing the number of keys.
  2. Enhance discoverability and learnability of commands.
  3. Optimize for fast input.

 

1. Simplify the layout

Here’s the key layout of my Lenovo Thinkpad:

Mute VolDn VolUp MuteMic Thinkvantage OnOff

Esc F1 F2 F3 F4 F5 F6 F7 F8 F9 F10 F11 F12 PrtScr ScrLk Pause Ins Del Home End PgUp PgDn

½ 1 2 3 4 5 6 7 8 9 0 + ´ Backspace

Tab Q W E R T Y U I O P Å ¨ Return

CapsLock A S D F G H J K L Æ Ø *

Shift < Z X C V B N M , . – Shift

Fn Ctrl Win Alt Space AltGr Menu Ctrl Up Left Down Right

LeftMouse MiddleMouse RightMouse

LeftMouse RightMouse

Let’s start by getting rid of keys instead of adding them. Who wants to go first? I’m in a slash and burn kind of mood. I think I’ll deal with the most overloaded keys first.

Right, the whole F1 to F12 range. Off you go! Scram! Those are notoriously overloaded and unnecessary. Who’s next? Modifier keys, get lost. Ctrl, Alt, Fn, Windows. The lot. But wait, we probably need Shift to stay. I want to be able to type in capitals.

Like I said, PrtScr, Scroll Lock, Num Lock and Break can go. And a key that launches the Windows calculator app? How often have you pressed that key?

Another type of problem comes with the keys that change the state of the keyboard. This is problematic because 1) the state can be changed accidentally, and 2) it’s too hard to remember what state the keyboard is in over a long period of time. So, Caps Lock, you’re just an accident waiting to happen. Get lost!

Let’s lose Ins as well. Like Caps Lock, its only purpose is to confuse. Touch it by mistake, and your PC is ready to let you accidentally overwrite the text in your document. Not likely what the user intended.

What about all the non-Latin characters and symbols? We still need to allow the user to enter those, even when AltGr is gone. The way the iPhone/iPad keyboard has solved this is a nice compromise: pressing and holding the E key presents a small menu giving you access to ë, é, ê and so on. But that won’t work on a physical keyboard, because it would introduce the need to reach for the mouse. FAIL. I suggest that holding the E key while pressing the arrow keys should cycle through the different accents and symbols. Today, holding down the E key produces eeeeeeeeeeeeeeeee. How useful is that? I can’t remember when I last used that “feature”. Except I did just now. Doh!

OK, MuteMic? And two of each of LeftMouse and RightMouse? No way I’m ever going to need two of those. “ThinkVantage”? I wonder what that button does, too. So far nothing has ever happened when I touch it. Is there a spell I need to say to get it to work, perhaps?

So, we’re down to this layout:

Mute VolDn VolUp OnOff

Esc Del Home End PgUp PgDn

1 2 3 4 5 6 7 8 9 0 + Backspace

Tab Q W E R T Y U I O P Å Return

A S D F G H J K L Æ Ø *

Shift < Z X C V B N M , . – Shift

Space  Up  Left  Down  Right

 

Perhaps now there’s room to add buttons that make a difference. Why have we never had an UNDO button? Because when the keyboard was designed, undo was available in only a few pieces of software. Undo was expensive because, to support it, the PC had to store lots of old state while struggling to keep even the current state in memory. It’s not like that anymore.

What about a Back button (and perhaps a Forward button to go with it)? Navigation has become a very common paradigm in recent years; many phones have a dedicated Back button. The great thing about a Back button is that it promotes confidence in software: when in doubt, go back! Confidence is good.

My keyboard now looks like this:

Mute VolDn VolUp OnOff

Esc Del Home End PgUp PgDn

1 2 3 4 5 6 7 8 9 0 + Backspace

Tab Q W E R T Y U I O P Å Return

A S D F G H J K L Æ Ø * Undo

Shift < Z X C V B N M , . – Shift

Space  Back  Forward  Up  Left  Down  Right

 

2. Enhance discoverability and learnability of commands.

How do we simplify without making interaction with applications much more complex? If I can’t press the F3 key (the one overloaded with a magnifying glass icon) to quickly search the web, should I then be required to open a browser, click in the search field, type my text and press Enter? Adding more keystrokes and mouse clicks seems like a step in the wrong direction.

I think a good solution would be to pick up the ideas behind launchers like Alfred or Quicksilver (OS X) and Enso or SlickRun (Windows). Here, one key (or key combination) lets you type commands like “go cheap hotels” (SlickRun) to search for cheap hotels. “go” is short for “Google”, here (illegally) used as a verb. Because you don’t have to type the whole name, invoking a command can be almost as fast as using a key combination. And if you have forgotten the combination but not the word “Google”, it’s many times faster! There’s a name for this kind of user interface: the graphical command line.

[Image: keyboarddriven]

 

And what about discoverability? Compared to key combinations, commands in a graphical command line, or GCLI, can be found by typing the first characters of their name; the list will autocomplete. A qualified guess will get the user closer to discovering the command than randomly guessing a key combination ever will. Also, commands can be made smart and context-sensitive, so that only appropriate commands show up in a given context.
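
To make the autocompletion idea concrete, here is a minimal sketch of the matching logic such a GCLI could use. The command names and the CommandPalette class are made up for illustration; none of the launchers above necessarily work this way.

using System;
using System.Collections.Generic;
using System.Linq;

// Minimal sketch of prefix-based command discovery for a GCLI (hypothetical API).
public class CommandPalette
{
    private readonly List<string> _commands = new List<string>
    {
        "google", "goto line", "lock screen", "new mail", "print", "undo"
    };

    // Return every command that starts with what the user has typed so far.
    public IEnumerable<string> Suggest(string typedSoFar)
    {
        return _commands
            .Where(c => c.StartsWith(typedSoFar, StringComparison.OrdinalIgnoreCase))
            .OrderBy(c => c);
    }
}

Typing “go” would narrow the list to “google” and “goto line”; even a wild guess beats trying to remember whether the shortcut was Ctrl+Alt+G or Ctrl+Shift+F3.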

[Image: quickwebsearch]

 

3. Optimize for fast input.

By analyzing which letters most frequently go together in English, Dr. Dvorak redesigned the keyboard in the 1930s to optimize writing speed. Take a close look at this keyboard. Does it look weird to you?

It did to me when, some years ago, I tried rearranging the keys on my keyboard to Dvorak. After trying to learn it for two weeks, I reverted to trusty QWERTY. Why? I guess I didn’t really believe at the time that it would be possible to unlearn the old way. But looking at how fast people can be on both T9 and QWERTY, with the right vendor buy-in it’s a possibility. It would take a strong market player and a bold move. But it is possible to turn the tide.

 

Who dares take the first step?

It’s going to take some courage to get moving in the right direction. And probably a large-ish company. Canon tried doing something wild and different in 1987 with the Canon Cat. It wasn’t a huge success. But some of the ideas are still sound today.

So who could pull this off? Most likely Apple is the only company with the design muscle and courage to take its simple design approach to the extreme. On the other hand, let’s not wait for others to do it. We can all keep an eye on simplicity and start removing things from our own designs until there’s nothing more to take away. Be it graphic design, code design, or product design.

Happy trashing!

Wednesday, December 14, 2011

Workflows or collaborative editing?

Workflows are often requested in enterprise software. They cover areas such as publishing articles in a CMS, sales and purchase order approval, online registration, shipment, subscription and cancellation, among many other things. You probably go through several workflows in the course of a work day.

Why do we have workflows? For many different reasons:

  1. Ensure that things are done in the right order. No shipment before payment. No shipment before all items are in stock. No unwrapping of presents before the tree has been lit.
  2. Ensure the right people do their part of the job. For instance, making sure an article is reviewed by PR before being published. Or that each elf makes his part of the wooden truck.
  3. Ensure the right people are informed. For instance, a mail is sent to the Finance dept. when a payment is overdue. And Santa is notified when little Molly misbehaves.
  4. Make sure that people don’t overwrite each other’s changes. For instance, the check-in/check-out mechanism in most content management systems. And making sure that you and your wife don’t both buy a gift for aunt Christie.
  5. Document that standards and procedures are followed to comply with rules and regulations.

Sometimes, workflows are put into place by default, by habit, or by convention. To publish even the simplest change to an article in a Microsoft SharePoint publishing site, for example, you need to perform the following manual actions:

  1. Check out the article
  2. Edit the article
  3. Save the article
  4. Check it in
  5. Click Publish
  6. Fill out the form
  7. Click OK.

That’s ridiculous! And that’s just the standard workflow before a workflow-aholic business architect has added extra steps and flows. What we should be doing instead is simplifying the life of software users.

Let’s think a bit about two categories of workflows: First, those that model the business processes closely, and secondly, those that are applied because of habit, by default or because of fear of being accused of not being in control of our processes. The first category deserves our attention and Christmas Love. Workflows in the second category should be replaced by something more efficient.

Speaking of which, what if, instead of a workflow involving check-in/check-out, your CMS worked like Google Docs, allowing any number of people to work simultaneously on the same document while seeing each other’s changes updated live on screen? What if you could have the same feature in your backend systems, allowing you to see the other users’ cursors as they type?

If someone makes a mistake (and someone will!), there’s a complete history available. Do you remember the disbanded Google Wave? It had a slider you could drag left and right to magically watch the letters disappear and reappear in the order they were added by all users. One place you can see this in action is on collabedit.com. Try typing in some text, then go to the history tab and drag the slider. Pure magic!

The technology that did the magic in Wave was acquired from Etherpad.com back in the day. Wave didn’t fail because of the real-time collaboration technology but for a lot of different reasons, one being lack of integration with email. After Wave was abandoned, Google open-sourced the components that allow for real-time collaboration. They are free for anyone to implement in a CMS, or in any web app.

I think it’s time to abandon the old check-in/check-out workflow. With components like SignalR, a simple Wave-like HTML editor can be done in 30 lines of C# code and 60 lines of JavaScript. In other words, it’s ready to be put into production to replace your CMS publishing workflow today. Sumit Maitra posted a demo of these concepts.
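
As a rough sketch (assuming a recent ASP.NET SignalR version; the hub and method names are mine, not from Sumit’s demo), the server side of such an editor can be little more than this:

using Microsoft.AspNet.SignalR;

// Each client calls Update whenever its local copy of a document changes;
// everyone else viewing the same document gets the new content pushed to them.
public class EditorHub : Hub
{
    public void Update(string documentId, string content)
    {
        Clients.Others.applyUpdate(documentId, content);
    }
}

The JavaScript side then maps applyUpdate to the editor and calls the hub’s Update method whenever the user types.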

One additional benefit is that the collaborative editing paradigm, when implemented right, caters for a much nicer user experience. Having to ask your colleague to unlock a file or document (while she’s on vacation) is time-consuming and frustrating. Typing fluently while she is working on the same document, article, product, campaign or whatever is a good user experience to me.

To conclude, workflows that model business processes are the important ones. They need our love and attention. Workflows that exist because of an old technical limitation, or because it has become a habit, need to be reconsidered. Workflows that make you comply with laws can perhaps be re-thought and simplified. Which of your workflows fall into which category will have to be assessed case by case. But chances are, if any of your systems rely on the check-in/out mechanism, there’s an easy win right there.

Merry workflow-ho-ho!

Tuesday, July 12, 2011

PicoFX–a dogma for simple frameworks

It’s really easy to be persuaded by a large framework that does everything. Who would want to tie their shoelaces together by choosing a framework, say an ORM, that does not have support for … say, enums? (Sorry, EF, I couldn’t resist picking on you.) Who can say no to log4net? After all, it’s comprehensive: it can log messages to files, rolling files, databases, event logs, toilet paper and the Times Square billboard at rush hour.

NHibernate can do ANYTHING you’d want from an ORM. Now that the current EF CTP supports enums, it’s getting there too. But all the extra functionality comes at a price that you’ll have to pay eventually: added complexity. Listening to “The Rise of the Micro-ORM with Sam Saffron and Rob Conery” on Hanselminutes, I was inspired by their thoughts about going back to basics. Both their ORMs support only basic ORM stuff, but the complexity is way down, and performance is so much better! Rob made me realize that SQL might just be the ideal DSL for writing queries. I had some of the same ideas back in 2006, when I wrote a simple closed-source ORM called Matterhorn. It was a response to the unnecessary complexity that NHibernate 0.3 introduced in my apps. Back then, when I tried to explain the pain points I had with NH, the reaction I was most often met with was “That’s odd. I haven’t had any problems with it.”

So micro-ORMs are great news to me. I like Dapper, Massive and PetaPoco. ORMs are just the beginning. In fact, I think we should expand the notion to “micro frameworks” to cover all kinds of frameworks: from databases to service buses, from IoC containers to REST APIs.
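
To illustrate why plain SQL works so well as the query DSL, this is roughly what a query looks like with Dapper; the Customer class and the connection string are just placeholders:

using System.Data.SqlClient;
using Dapper;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class CustomerQueries
{
    public static void Run()
    {
        using (var connection = new SqlConnection(@"Data Source=.;Initial Catalog=Shop;Integrated Security=True"))
        {
            connection.Open();

            // The SQL is the query language; Dapper just maps the rows to Customer objects.
            var customers = connection.Query<Customer>(
                "select Id, Name from Customers where Name like @pattern",
                new { pattern = "A%" });
        }
    }
}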

The term “Micro framework” might sound a little too much like the Microsoft .NET Micro Framework. So let’s instead go for something even smaller … what about PicoFX?

A dogma needs strict rules. So here are the rules that a framework or API must respect to be a PicoFX:

  • Max 1000 lines of code (excluding unit tests)
  • Only one code file that the user can include in any project
  • No dependencies except the .NET BCL
  • Must have an open source license
  • Fully unit tested

I challenge you to take a look at the framework that you wrote. Rip it apart like Rob Eisenberg did with Caliburn. Take the crucial bits and leave the rest out. Boil it down to under 1000 lines. Is it possible? Then you’ve managed to distill the essence of the framework and leave the fat to the dogs. And chances are performance will improve too.

That should be simple enough, right? Kind of. Once the first version is out and the users start requesting additional features and you have all sorts of great ideas as well, trouble will start brewing. Can you add features and stay below 1000 lines? It’s tempting to add the killer feature thinking that 1156 lines won’t hurt. It’s only a bit more than 1000…

Every feature represents an additional tax that your users will pay. What if you start taking out one feature every time you add one, respecting the 1000-line limit? You won’t be 100% backwards compatible. That may hurt some. On the other hand, you’ll be keeping your promise of a simple framework that stays simple.

I’d better chip in and start taking my own medicine. I hereby submit PicoIoC, a 280-line fully functional IoC container. It’s heavily inspired by Autofac and supports registration by type, by lambda or by instance. Automatic lifetime handling means PicoIoC will deal with calling Dispose() if your class implements IDisposable. It supports abstract factories and Lazy<T>. Yak yak yak. Check it out yourself. I promise you it will stay under 1000 lines.
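
To give a feel for what fits inside that budget, here is a minimal, self-contained container sketch in the same spirit. To be clear, this is not PicoIoC’s actual source; it just illustrates registration by type, by lambda and by instance, plus Dispose() handling, in a few dozen lines:

using System;
using System.Collections.Generic;

public class TinyContainer : IDisposable
{
    private readonly Dictionary<Type, Func<object>> _factories = new Dictionary<Type, Func<object>>();
    private readonly List<IDisposable> _disposables = new List<IDisposable>();

    public void Register<TService, TImpl>() where TImpl : TService, new()
    {
        _factories[typeof(TService)] = () => new TImpl();           // by type
    }

    public void Register<TService>(Func<TService> factory)
    {
        _factories[typeof(TService)] = () => factory();             // by lambda
    }

    public void Register<TService>(TService instance)
    {
        _factories[typeof(TService)] = () => instance;              // by instance
    }

    public TService Resolve<TService>()
    {
        object instance = _factories[typeof(TService)]();
        var disposable = instance as IDisposable;
        if (disposable != null)
            _disposables.Add(disposable);                           // track for automatic disposal
        return (TService)instance;
    }

    public void Dispose()
    {
        foreach (var disposable in _disposables)
            disposable.Dispose();
    }
}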

I can’t wait to see the PicoFX versions of your framework.

Tuesday, December 28, 2010

QueryMyDesign allows you to query the structure of your own code.

What if you could test your design like this:

from m in Methods.InAssemblyOf<MyClass> where m.CyclomaticComplexity() > 40 select m

or this?:

from t in Types.In(sut) where t.CountUsesOfNamespace("SkinnyDip.Tests") > 0 select t

Then, if placed inside a unit test method, you will be able to verify your design (and guard against regressions):

Assert.Empty(from t in Types.In(sut) where t.CountUsesOfNamespace("SkinnyDip.Tests") > 0 select t);

Why re-invent NDepend? Short answer: I’m not. NDepend can do anything QueryMyDesign can and much, much more using its own query language. I just thought it would be interesting to have a small, simple API to express basic structural queries with a LINQ-friendly syntax.

This way, you get type safety for free. Plus R# refactorings will work all the way to your queries.

Did I say type safety? Since namespaces are represented as strings by Cecil, and since they're not really first-class citizens of System.Reflection either, working with namespaces is not nearly as fool-proof as working with types. C# has the built-in operator typeof(A), but I miss namespaceof(A) and methodof(A.B). methodof(A) can be faked by using Clarius Labs’ Typed Reflector and the Reflect<MyClass>.GetMethod(c => c.DoStuff()) syntax. I’ve modified the code to return Mono.Cecil.MethodDefinition instead of System.Reflection.MethodBase.

This allows you to write tests of single methods:

Assert.True(Reflect<D>.GetMethod(d => d.UsesE()).CyclomaticComplexity() < 10);

So, what can you learn about your code using QueryMyDesign?

* Number of instructions, number of variables, and cyclomatic complexity of methods, types, namespaces and assemblies.

* All references (methods to methods, types to types, namespaces to namespaces and assemblies to assemblies.)

* Discover cyclic references between types (and the reference graphs between them.)

* Calculate Uncle Bob's metrics such as Instability, Abstractness, Distance From Main Sequence, Amount of Pain, and Amount of Uselessness (see the definitions below).
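
For reference, the first three are usually defined as: Instability I = Ce / (Ca + Ce), where Ce and Ca are the outgoing (efferent) and incoming (afferent) dependencies; Abstractness A = the number of abstract types divided by the total number of types; and Distance from the Main Sequence D = |A + I - 1|. Pain and Uselessness express how close a component sits to the zone of pain (maximally concrete and stable) and the zone of uselessness (maximally abstract and unstable), respectively.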

Convenient syntax

If you want to ask specific questions, like “What types does System.String use?”, “Does type A depend on type B?” or “Which of my types have cycles?”, there are convenient extension methods provided:

var stringDependencies = Types.UsedBy<string>();
bool aUsesB = Reflect<A>.GetType().FindUsesOf<B>().Any();
var typesWithCycles = Types.InAssemblyOf<MyType>().Where(t => t.HasCyclicDependency());

Behind these easily accessible methods lie classes such as MethodDependencyFinder, TypeDependencyFinder and so on. For more advanced scenarios you’ll need those.

Pain and uselessness

If you want to get dirty with the metrics relating to The Main Sequence, things get a little bit less convenient. But only slightly. Metrics like Instability need to know about both incoming and outgoing dependencies, so we need some way of defining the system boundary. Otherwise we won’t be able to find incoming dependencies. We need a dependency structure matrix:

var dsm = new TypeDependencyStructureMatrix(new[] {typeof(SomeClass).Assembly});

This constructor takes a collection of assemblies to search within. When that’s settled, we can ask questions like:

double i = dsm.GetInstability<SomeClass>();
double a = dsm.GetAbstractness<SomeClass>();
double d = dsm.GetDistanceFromMainSequence<SomeClass>();
double p = dsm.GetAmountOfPain<SomeClass>();
double u = dsm.GetAmountOfUselesness<SomeClass>();

Whether these metrics can tell you precisely what parts of your code hurt is a different discussion. At least they’re easily accessible using QueryMyDesign.

Get the source at GitHub. And remember, it’s alpha quality.

Tuesday, September 9, 2008

Using the repository pattern to achieve persistence ignorance in practice

I recently experimented with migrating a project from Linq2Sql to Linq2NHibernate. It’s a small Windows time-tracking application that features offline capability.

The original app, built a year ago, used Linq2Sql’s class designer to create domain classes from existing database tables. Along with the domain classes, it created a DataContext class:

public partial class DomainDataContext : System.Data.Linq.DataContext
{
    public System.Data.Linq.Table<Customer> Customers
    {
        get { return this.GetTable<Customer>(); }
    }

    public System.Data.Linq.Table<Project> Projects
    {
        get { return this.GetTable<Project>(); }
    }
}

Table<T> is in fact Microsoft’s implementation of the repository pattern. I have two issues with Table<T> as a repository implementation. One, I like my repositories to take the shape of a collection, more in line with what repositories originally were: a facade that lets you access data through a collection metaphor. The method names should be Add, Remove and Clear, as you would expect from a normal collection. In Linq2Sql, MS renamed those to InsertOnSubmit, DeleteOnSubmit and so on.

Issue number two with Table<T> is that the methods Insert/DeleteOnSubmit are not defined in an interface but on Table<T> directly. That means I have to rely on a concrete class. Bad OOD karma! The thing is, these methods are really part of another pattern, Unit of Work. There is a muddy mismatch between the two and a need for a unified way to access data through repositories.

In order to be accessed in a manner closer to real collections, I could let each repository implement ICollection<T>:

public interface Repositories : IDisposable
{
    ICollection<Customer> Customers { get; }
    ICollection<Project> Projects { get; }
}

That’s all well and dandy as long as my repositories are simple in-memory collections, or in-memory collections persisted using XML. If I want to switch to repositories backed by Linq2Sql or Linq2NHibernate, trouble arises. The result is that each time a repository is queried, the whole table is loaded into RAM and filtered there. Oops. The trouble has to do with the way that Linq compiles queries.

Linq is able to choose between running queries in memory or capturing the query in an expression tree and then translating it into SQL for execution on the database server. The (not so secret) secret consists of two interfaces: IEnumerable<T> and IQueryable<T>.

If the collection you query against implements IQueryable<T>, the expression is translated to SQL by the LINQ provider (Linq2Sql in this case). If the collection only implements IEnumerable<T>, the query is run in memory when the GetEnumerator() method is called.
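
To make the difference concrete, here is a small sketch based on the DomainDataContext from above (assuming the Customer class has a Name property):

var db = new DomainDataContext();

// Statically typed as IQueryable<Customer>: the lambda is captured as an expression
// tree and translated to SQL, so the filtering happens in the database.
IQueryable<Customer> queryable = db.Customers;
var filteredInDatabase = queryable.Where(c => c.Name.StartsWith("A"));

// Statically typed as IEnumerable<Customer>: Enumerable.Where is chosen instead,
// so the whole Customers table is pulled into memory and filtered there.
IEnumerable<Customer> enumerable = db.Customers;
var filteredInMemory = enumerable.Where(c => c.Name.StartsWith("A"));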

When switching from in-memory collections to ORM-backed repositories, I can no longer let my repositories implement only ICollection<T>, since Table<T> and NHibernate’s Linq<T> expose IQueryable<T> instead. In other words, I’m forced to change my interface to:

public interface Repositories : IDisposable
{
    IQueryable<Customer> Customers { get; }
    IQueryable<Project> Projects { get; }
}

Only now, I’m back to having repositories that are queryable but do not include any way to add or delete objects.

What I’d really like is a way to leave my Repositories interface alone while still being able to switch between database persistence, file-based persistence, no persistence, pen-and-paper-based persistence, coffee-based persistence … anyway, you get the point.

What I need is a new interface:

public interface QueryableCollection<T> : IQueryable<T>, ICollection<T> { }

Allowing me to declare my repositories as:

public interface Repositories : IDisposable
{
    QueryableCollection<Customer> Customers { get; }
    QueryableCollection<Project> Projects { get; }
}

That way I can easily swap the persistence mechanism, or even have two different schemes running at the same time.
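
For example, the calling code never needs to know which implementation it is talking to. A sketch, assuming Customer has a Name property and a hypothetical CreateRepositories() helper that picks the XML or the NHibernate implementation:

using (Repositories repositories = CreateRepositories())   // hypothetical factory: XML offline, NHibernate online
{
    // Query through the IQueryable<T> side of the interface...
    var customersStartingWithA =
        from c in repositories.Customers
        where c.Name.StartsWith("A")
        select c;

    // ...and modify through the ICollection<T> side, regardless of the backing store.
    repositories.Customers.Add(new Customer { Name = "Acme" });
}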

Here is my repository implementation for NHibernate:

public class NHRepositories : Repositories, ConnectionProvider
{
    private readonly ISession _session;

    public NHRepositories(ISession session)
    {
        _session = session;
    }

    public QueryableCollection<Customer> Customers
    {
        get { return new NHRepositoryAdapter<Customer>(_session); }
    }

    public QueryableCollection<Project> Projects
    {
        get { return new NHRepositoryAdapter<Project>(_session); }
    }

    public void Dispose()
    {
        _session.Dispose();
    }
}

NHRepositoryAdapter exposes NHibernate’s Linq<T> as a QueryableCollection<T>:

internal class NHRepositoryAdapter<T> : QueryableCollection<T>
{
    private readonly ISession _session;

    public NHRepositoryAdapter(ISession session)
    {
        _session = session;
    }

    public IEnumerator<T> GetEnumerator()
    {
        return _session.Linq<T>().GetEnumerator();
    }

    // The remaining QueryableCollection<T> members (Add, Remove, Expression,
    // Provider, ElementType and so on) delegate to _session and _session.Linq<T>()
    // in the same fashion and are omitted here for brevity.
}

To satisfy the in-memory collections, I made an adapter that exposes an IList<T> as a QueryableCollection<T> using Linq’s built-in AsQueryable() method:

public class QueryableList<T> : IList<T>, QueryableCollection<T>
{
    private readonly List<T> _list;
    private readonly IQueryable<T> _queryable;

    public QueryableList()
    {
        _list = new List<T>();
        _queryable = _list.AsQueryable();
    }

    public IEnumerator<T> GetEnumerator()
    {
        return _list.GetEnumerator();
    }

    public Expression Expression
    {
        get { return _queryable.Expression; }
    }

    // The remaining IList<T> members delegate to _list, while Provider and
    // ElementType delegate to _queryable; they are omitted here for brevity.
}

Couldn’t I just implement my repositories by inheriting List<T>, implementing IQueryable<T> and then delegating calls to IQueryable<T>’s members to Enumerable.AsQueryable()? That would save the tedious wrapper code. Unfortunately, that results in stack overflow errors when Linq calls the getters for the three properties Expression, Provider and ElementType defined in IQueryable<T>. The reason is that AsQueryable() is an extension method that returns the source unchanged when it already implements IQueryable<T>: calling this.AsQueryable() (or base.AsQueryable(), which resolves to the same call) just hands back the object itself, so each getter ends up calling itself even though it has been overridden in the subclass.
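
In code, the self-call looks like this (a sketch, assuming the subclass itself implements IQueryable<T>):

public Expression Expression
{
    // AsQueryable() sees that 'this' already implements IQueryable<T> and simply
    // returns 'this', so the getter calls itself until the stack overflows.
    get { return this.AsQueryable().Expression; }
}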

Another concern to air: does the persistence mechanism really change often enough to justify this abstraction and added complexity? Not always. In this particular app, yes. One of the requirements is smooth operation online as well as offline. I can achieve that easily using my QueryableCollection interface. When running offline, my repositories use XML as storage. When online and when synchronizing, they use NHibernate with a SQL Server database behind.

Another way of achieving offline functionality would be to let the app talk only to a SqlCe 3.5 database via Linq2Sql or Linq2NHibernate and then let ADO.NET Synchronization Services sync it with the master SQL Server database. Then you wouldn’t need the abstraction I made, but the complexity would only be relocated to configuring Synchronization Services.

Anyway, this solution gives me maximum flexibility in persistence ignorance. The price is a new interface and an adapter class for each storage mechanism. It’s not worthwhile in every solution, but it can be if you need the ability to manage offline/online synchronization manually or to store data in several places using the same repository abstraction.

2 Responses to 'Using the repository pattern to achieve persistence ignorance in practice'
  1. Morten Lyhr said,

    ON SEPTEMBER 9TH, 2008 AT 11:40 PM

    Great post Søren!

    But its not persistence ignorence you have achieved, its ORM ignorence.

    Actually I was wondering how to make a “POCO LINQ” repository that was not tied to any specific ORM. I guess you beat me to it :-)

  2. Rasmus Kromann-Larsen said,

    ON OCTOBER 10TH, 2008 AT 11:50 PM

    Nice post.

    I’m about to play around with LINQ2NHibernate myself, in a LINQ-less solution that was recently kicked up to 3.5. I think your post might be the inspiration for my repositories.

    - Rasmus.

Friday, June 27, 2008

Dear Santa, bring us Boo 1.0

I wish the programming language Boo had greater momentum and a larger user group. I’d love to use it for writing production-quality enterprise apps, but I don’t dare. To be frank, even though the authors do an excellent job of adding features and fixing bugs, there are just substantially fewer hands available compared to the forces behind C# 3.0 and VB.NET 9.0.

The ideas behind Boo are fresh and experimental, and they let us do great things with little effort. My hands ache every time I have to transform one collection into another using 10 lines of C# 2.0 when I could have done it in 2 lines of Boo. Getting lambda expressions and extension methods in C# 3 is a step forward, but Boo is already moving further ahead, giving us extension properties and a built-in abstract macro facility that enables us to write in-language DSLs.

Still, the risk of switching to Boo for real-world apps is too big, and the tool support is too limited at this time. Boo also needs to let me define my own generic types and methods before our relationship can move to the serious phase.

I wish there was some way I could support the authors of the Boo programming language. Money? Don’t have that much. Programming time? My family will leave me if I spend more pc-time.

Instead, here are a couple of words of appreciation: Boo brings the best from the functional style languages and the CLR. It provides ultimate power while still keeping tight focus on simplicity.

In a perfect world… (sigh)

Saturday, May 3, 2008

Edit and Continue effectively disabled in Visual Studio 2008

I find Edit and Continue to be a productivity booster, and I use it every day. Or rather, I used to use it before I got into the habit of using LINQ. I also find LINQ to be a productivity booster because I can express my intent at a higher level of abstraction than before. I rarely write foreach loops anymore since it’s often more brief and to the point to use one of LINQ’s extension methods and a lambda expression.

Whenever a method contains one or more lambda expressions, Edit and Continue stops working. It’s not that it’s actually disabled in VS. You can go ahead and edit your method while debugging; it just won’t allow you to continue. So it’s effectively Edit and NO Continue™.
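
For example, a perfectly ordinary method like this one (the names are made up) cannot be edited during a debug session in VS 2008, simply because it contains a lambda:

public IEnumerable<string> LongWords(IEnumerable<string> words)
{
    // The lambda passed to Where is enough to block Edit and Continue
    // for this entire method.
    return words.Where(w => w.Length > 10);
}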

It didn’t start out as a problem for me, but after becoming friends with LINQ and really getting it under my skin, a rough estimate of 75% of my methods contain LINQ code these days. Why don’t my two best friends, LINQ and Edit’n'Continue, like each other? I pray the explanation is: it’s hard to do, and Microsoft didn’t get it ready before they shipped VS 2008.

Service pack 1 maybe?

2 Responses to 'Edit and Continue effectively disabled in Visual Studio 2008'


  1. Morten Lyhr said,

    ON JUNE 13TH, 2008 AT 7:48 PM

    I really dont see the point in E&C?

    Why do I have to use my time in the debugger?

    Stay out of the debugger, with unit test and TDD.

    As usual Jeremy D. Miller — The Shade Tree Developer, sums it up nicely.

    Occasionally you’ll see a claim that TDD == 0 debugging. That’s obviously false, but effective usage of TDD drives debugging time down and that’s still good. From my experience, when the unit tests are granular the need for the debugger goes way down. When I do have to fire up the debugger I debug through the unit tests themselves. Cutting down the scope of any particular debugging session helps remarkably. The caveat is that you really must be doing granular unit tests. A lot of debugging usage is often a cue to rethink how you’re unit testing.

    Taken from http://codebetter.com/blogs/jeremy.miller/archive/2006/03/31/142091.aspx

  2. Soren said,

    ON JUNE 20TH, 2008 AT 11:02 AM

    I’m not a debugger lover :) I’d certainly love to use it less and I too think that doing TDD helps in that regard. But even unit tests and the code under test have to be debugged once in a while.

    Given that a debugger is sometimes necessary, E&C just makes the ride much smoother. The whole experience is more organic, like I’m molding a sculpture with my hands.

    Contrast that with the rigid feeling of writing, compiling, running tests. The pause from the time when you have a thought till the time when its effect becomes observable is very small with E&C.

    The point you are making is against relying overly on debugging, not against E&C. A debugger capable of E&C is preferable to one that isn’t.