If you’re in the West Country on July 15th 2010 (that’s tomorrow, as I write), then the best place you could be is Ignite Bristol:
Here, in no particular order, is who and what you can expect on Thursday 15 July at the Tobacco Factory Theatre.
- Lee Cottier, Why can’t men talk about their feelings?
- Martin Poulter, What is Bayesianism and why should you care?
- Amy Whitaker, Toxins in your sex toys
- Sharon Stiles, Mind Blocks Sorted
- Giles Davis, Behavioural Economics, wtf?
- Christina Jones, Privacy & The Web in 2010
- Nigel Legg, Tombwe, Fuaka, Tobacco: got a light, mate?
- Paul Parry, literally.
- Kaz Pasiecznik, Cryptic secrets – unlocking the crossword
- Claire Scantlebury, Big boy branding and small town values
So there we have it! Eclectic enough for you? We’re pretty confident that there is no other night out anywhere in the world that can boast that line-up.
Three friends, Dave Cross, Greg McCarroll and Josette Garcia, met at YAPC Europe in Lisbon wanting to update the world on the status of Perl. As Damian Conway, one of the key designers of Perl 6, was giving a talk, we thought we should ask him a few questions. The following questions took a long time to put together, possibly in sympathy with Perl 6.
— Perl Dead
Dave: We often hear people saying that Perl is dead. I assume that you don’t agree, but what do you say to people who try to tell you that?
I usually say “I agree. Sad, isn’t it. Ah, well.” This causes the crazy person to go off quietly, so I can get back to uploading new or improved Perl modules to extend CPAN’s 7GB of open source software (as over 700 other developers also do every month), or attending several of the dozen or more Perl conferences and workshops held all round the world every year, or visiting some of the over 250 Perl user groups in 58 countries around the world (www.pm.org), or interacting with the thousands of like-minded people helping each other study at Perl’s premier learning website (www.perlmonks.org), or participating in other online Perl social and study activities (such as the Perl Iron Man Challenge), or upgrading to the new releases of Perl 5 and Perl 6 (both of which are now on a monthly release cycle), or contributing to the active development processes for either of the two implementations of Perl 5 or any of the four implementations of Perl 6, or teaching the dozens of major new features that have been added to Perl 5.10 and Perl 5.12 over the past three years, or getting involved in the various Perl renaissance activities (www.enlightenedperl.org and www.modernperlbooks.com), or just reading Tim Bunce’s excellent summary of how dead Perl has never, in fact, been more alive.
I can only hope that, when I’m dead too, I’m as vibrant and active and still growing as strongly as Perl is now.
Greg: Many people are faced with managers who see Perl as a dying language – apart from “get a new job,” what advice would you give to them to evangelise Perl?
Well, first of all, see above.
Also, they need to look at the issue from their manager’s perspective. Many Perl developers see their goal as being to develop in Perl; their managers see their goal as being to develop cheaply, effectively, reliably, robustly, maintainably, quickly, and with as little risk as possible. In whatever language achieves all that.
So Perl advocates need to focus their arguments for using Perl on those needs, rather than on their own desire to use Perl. In other words, they need to make a business case for Perl, rather than just a technical case.
Such a business case might include the high number of extremely good programmers available for Perl development, the familiarity and skills of their existing team with Perl, the huge resource of free software that is CPAN, the large number of mature, powerful and scalable frameworks now available for Perl, the speed with which Perl can be used to implement and deploy systems, how the high-level nature of Perl eliminates common low-level problems, the huge amount of high-quality (and often free) support that’s available for Perl, and the number of large organizations who have successfully delivered mission-critical systems in Perl (for examples see: www.perltraining.com.au/whyperl.html#who, www.perlfoundation.org/perl5/index.cgi?companies_using_perl, www.london.pm.org/advocacy, www.proudtouseperl.com, or blog.listcentral.me/2009/05/22/companies-that-use-perl).
Selling Perl to your manager is about explaining Perl in terms that your manager cares about: stable, reliable, powerful, efficient, cheap, maintainable, well-supported, and future-proof.
Josette: Recently Martin Drashkov wrote “Why Perl Lost It” (http://martin.drashkov.com/2009/11/why-perl-lost-it.html). What do you think of the final statement: “Whether Perl will continue to decline or not is hard to say, though I highly doubt it’ll ever return to its former glory – neither Perl5 nor Perl6 seem to offer much for developers who do not already love Perl.”?
With due respect to Martin, whose other work I admire, I think that his initial premise (that Perl is declining) is extremely dubious and that his final observation is strongly contradicted by the reactions I get whenever I tell people about Perl 6.
I think it’s true that recent versions of both Perl 5 and Perl 6 do have a huge amount to offer those who *do* love Perl, but I also think that Perl 6 at least has a huge amount to tempt those who have previously found Perl 5 unsatisfactory. From its immensely sophisticated OO model to its extraordinarily powerful built-in grammars to its easy-to-use native concurrency and data parallelism to its immensely handy multiple dispatch semantics to its cleanly integrated first-class macro facility to its incredible support for creating domain specific languages, I think Perl 6 has a huge amount to offer *all* developers. We’ve spent a decade stealing the very best ideas from the best programming languages, and making them simple and practical for mortal developers to use. I think the result is going to appeal to a lot of people.
By the way, the article you quote is still well worth reading…especially the comments at the end of it, which more or less demolish every point Martin makes.
Greg: One thing a lot of Perl programmers face is large old codebases, full of different coding styles and often difficult to write unit tests for. Drive-by refactoring is a common solution, but do you have any silver-bullet ideas, or even just ideas, for understanding and working with these aging codebases?
No silver bullets, but one simple and blindingly obvious principle: “Start Evolving…Today!”.
Refactoring a large heterogeneous codebase is usually neither feasible nor even helpful (the word “enbugging” leaps to mind), but at least you can stop adding to the problem as you continue adding to the code.
The simplest approach is to get your team to agree on a coding standard and testing process and then start applying those constraints to any new (or frequently maintained) portions of the codebase. At the very least, you’re not making the problem worse and, as the software expands, its percentage of cruddy code will necessarily diminish.
It’s no magic wand, but it does work. Evolving the code style and introducing testing frameworks as you maintain the code focuses your efforts on the parts of the codebase where improvements will most benefit the team; namely, those parts where they most frequently interact with the code. And if they’re consistently using their new coding style and testing frameworks in new development, it will be easy to apply them correctly when reworking existing code.
Finally, because they’re refactoring and restyling code that they’re actively maintaining, they’ll be more likely to reimplement it correctly.
— Perl 6 Delay
Dave: In retrospect, is there anything that you and Larry would have done differently with the Perl 6 design process? Is there any way that it could have taken less than nine years?
Sure. We could have done a worse job. We could have simply patched Perl 5 with the most popular requests from the RFC process. We could have accepted the less-clean, less-well-integrated, less-clever design that we had five years ago. We could have let the good be the enemy of the best. But I’m glad we didn’t.
The evidence is that most major new programming languages take about a decade to reach a stable and useful design. C++ did, Java did, Perl 5 did, Haskell did, Python 2.0 did, Standard ML did. ANSI C arguably took two decades to get right, and Lisp took either two or four (depending on whether you think Scheme or Common Lisp was the final “correct” incarnation).
So when people point to the fact that the Perl 6 design process has taken 10 years, I consider that to be a sign that we did it right.
The only real difference between Perl 6 and most other languages is that we had the courage (or perhaps the foolhardiness) to carry out that inevitable decade-long design process in public. In retrospect it would have been politically cleverer to have worked quietly on it from 2000, announced the project in 2008 and delivered it this year.
Except, of course, that without the help, insight, and feedback of the dozens of highly talented people the public process has attracted, Perl 6 might still have taken ten years to design, without being nearly as good.
Greg: A lot of work has gone into regular expressions and parsing in Perl 6. With XML/YAML available for data and with ‘little languages’ being a niche technology, do you think this focus has been worth it?
Interesting question. You might have asked exactly the same question about Perl 5, whose regex mechanism has had two decades of improvement lavished on it. Has that proven worthwhile? I think so.
As for Perl 6, I think it’s impossible to accurately estimate just how valuable a built-in grammar mechanism is until you actually have one. It’s true that XML and YAML/JSON cover a lot of data representation needs, but there is a vast amount of data that isn’t available in those formats and will never be unless it’s parsed and converted. And a grammar is unquestionably the best way to do that.
As for the niche of DSLs, I think it’s been a niche precisely because the necessary tools simply weren’t available to implement little languages cleanly, easily, and efficiently. One of the reasons we’ve worked so hard on Perl 6’s regexes and grammars is to make developing those DSL interfaces very much easier.
The other reason we’ve spent so much time on regexes and parsing is that Perl 5 regexes, which are easily the most powerful and sophisticated in a mainstream language, also now have a near-fatal problem. As they’ve evolved from the simple syntaxes of the 1970s, they’ve become “trapped” by the ever-diminishing syntax available for new constructs.
Virtually every major improvement to Perl 5 regexes over the past decade has a syntax of the form “(?<something>)”. A modern regex that’s making use of the full power of Perl 5 regular expressions looks more like Lisp than Perl! It’s as if the long-term evolution of the regex syntax had produced 10 useful organs and 30 vermiform appendixes.
It was time to redesign the entire organism, so that the simplest syntax could be assigned to the most useful constructs (instead of being monopolized by those constructs that happen to arrive first). We think of this as analogous to finding the optimal Huffman coding for regexes, so that commonly used regex features become easier to write and easier to read.
And when you see how very much easier Perl 6 regexes and grammars are to work with, and even just to read, I think it’s clear that the result was worth the considerable effort required.
— Perl 6 Released
Dave: At YAPC in Lisbon, Patrick announced that a first version of Perl 6 (called Rakudo Star) will be released in Spring of 2010. What are you looking forward to most about seeing Perl 6 out in the wild?
That’s easy: what I’m most looking forward to is not being asked “When will Perl 6 be released???”
But seriously, I’m really looking forward to being able to write production systems in Perl 6. The language is just a joy to code in, and it’s been enormously frustrating to have been able to deploy it only in my head for the last few years. Now that Rakudo is usable, I’ve switched most of my development to Perl 6, and it’s like stepping out of a prison I didn’t even know I was in (exactly as it was when I first moved from Pascal to C, and then again from C to Perl).
The other thing I’m really looking forward to is seeing what the wider Perl community does with this new tool. I have big plans for new projects myself, but I’m far more excited to see the amazing, brilliant, and utterly unexpected wonders our community will create on top of Perl 6.
Greg: Unofficially, but with an educated guess: in which year do you think Perl 6 will be used in a business-critical application in, say, 10 reasonably sized (5 million+ Euro turnover) companies, and will it be greenfield development or a conversion from existing Perl code?
Whenever it is, it will almost certainly be a fresh codebase (though it might easily be a clean reimplementation of an existing Perl 5, Java, or Python app). I think the vast majority of Perl 6 development will be the creation of new code, rather than the migration of existing Perl 5 code. Simply because Perl 5 will still be there, and still kicking butt, so there won’t be much incentive to migrate existing software.
As to when, I’d say we’re probably looking at three to five years for Perl 6 to achieve that level of penetration, acceptance, and trust.
Greg: Which do you think are the best features Perl 5.x has gained through Perl 6 work?
Tough question. Personally, the one I love most is by far the least sophisticated: the “say” function that was retrofitted to Perl 5.10. It solves the commonest annoyance in Perl 5: the perennial need to add a newline at the end of most “print” statements. Not very sexy, but it scratches an incessant tiny itch that used to niggle me at least twenty times a day.
I’m also very partial to smartmatching and the given/when construct. I find I use them surprisingly often in my current Perl 5 development.
Dave: When will Perl 6.1 be released? And what will it contain?
Probably at least a year after Perl 6.0, which is itself still six to twelve months away. I’m afraid that’s not really a question I’m competent to answer, it being entirely the bailiwick of the implementers.
As to what we’re likely to add to Perl 6 for the next major increment, hopefully not very much. The whole point of the Perl 6 design is that we’ve tried to add enough power so that anything we missed can be added from within the language, rather than having to add it onto the language itself.
That said, I think the focus of our design efforts from this point will be on Perl 6’s threading and concurrency models, and I expect that Perl 6.1 will be much more sophisticated and complete in that respect.
— Perl / Computing in general / Anything Else
Dave: Which of your many CPAN modules do you think is the most useful? And which is the least?
The most widely used seem to be Parse::RecDescent, Text::Autoformat, and Regexp::Common. Whether that means they’re the most useful, I’m not sure.
From another point of view, you could also argue that the Switch module has been the most useful since, despite its obvious deficiencies (or perhaps because of them!), it eventually led to the addition of native versions of given/when and smartmatching in Perl 5.10 and Perl 6.
The least useful? That’s actually much harder to determine than it should be. Lingua::Romana::Perligata (Perl in grammatical Latin) and Coy (error messages translated into haiku) are obvious candidates, except that I know that both have actually been used in production.
So it’s probably Acme::Bleach (Perl encoded in pure whitespace). Especially because I know that people have also mistakenly tried to use that module to “protect” their source code. Sigh.
Josette: I keep being told that programs written in Perl are often impossible to read by other Perl programmers and that this problem does not occur with programs written in other languages such as Python. Do you agree with this statement and can you explain why this happens?
I hear this all the time, and it’s just not accurate. I see a lot of different Perl code in many different styles, and I can read most of it without trouble. I see a lot of codebases with self-inconsistent coding styles, which other Perl programmers nevertheless manage to maintain without undue difficulty.
Of course, a standard set of coding conventions is an excellent idea and should be everybody’s first choice. But coding consistently in a readable style does require a certain discipline that doesn’t come naturally to everyone. There’s a necessary learning curve there: for the individual programmer, for programming teams, and for the entire Perl community. I think as a community we’re making good progress, but that the message of consistency and writing-for-readability is still filtering down to some individual developers.
And, yes, I often hear the claim that code written in Python (or Java or Eiffel or any other language less flexible than Perl) is ipso facto more readable. But that claim neglects a fundamental truth: that syntax is only one dimension in which people will express their individuality and inconsistency. And, in many respects, it’s the least important dimension.
Sure, all Python code has to look superficially pretty much the same, but that just shifts the essential complexity of the code to some other dimension. Typically it will manifest elsewhere, in an inconsistent variable or method naming scheme, or in the use of hard-to-comprehend data structures, or in subtle non-standard algorithms, or in weird library APIs, or in overlaying a functional or procedural style on the intrinsic Python OO worldview.
The bottom line is that bad programmers will program unreadably in *any* language. Perl just lets them do so at the simplest and most easily overcome level: syntactically.
Greg: If Perl didn’t exist which language would you be drawn to?
If Perl didn’t exist I think I would have found it necessary to invent it. Probably badly.
That’s in fact more-or-less what I was doing at the time I first discovered Perl: working on the design of a dynamic language called OOK (“Object-Oriented Kernel”). But I abandoned it as soon as I realized that Perl already provided all of the power, convenience, and flexibility that I had been striving for in my own design.
If I weren’t allowed to create my own replacement for a non-existent Perl (and assuming there was consequently no Ruby either), I suspect I would have been forced to choose Python. Though I’m not at all sure I would have been a force for good in that community.
Or perhaps I’d have gravitated towards something like Lua or Sather. Both of them had huge potential that hasn’t quite been translated into popularity or market share. There might have been real opportunities to contribute to their communities.
Greg: What non-computing books would you recommend programmers read?
Programming is an intrinsically creative task, so it’s critically important to feed your creativity from outside the discipline. My own interest has always been in new models and metaphors for computation and better ideas for interfaces, so I try and read as widely as I can in the hard sciences (especially physics and mathematics) and in the literature of general design. But that’s me, and most people wouldn’t find inspiration in those same places.
So the general answer, I think, is that you need to find books that stretch your brain in unexpected ways, that break you out of your habitual ways of thinking and of viewing the world, that challenge your assumptions and your certainties. Some great examples of such books are “The Design of Everyday Things” by Donald Norman, “Freakonomics” by Stephen Dubner and Steven Levitt, “Guns, Germs, and Steel” by Jared Diamond, “The Prince” by Machiavelli, “Catch Me If You Can” by Frank Abagnale, “Lost in the Cosmos” by Walker Percy, or just about anything that Douglas Hofstadter has written (sadly, most people seem to stop at “Gödel, Escher, Bach”).
Now, I don’t say that I agree with every idea or theory in those particular books, or even with most of them, but I do think that every one of them issues a direct challenge to our entrenched expectations and beliefs. And I think that’s the critical thing.
So much of everyday programming is monotone. You need to transcend that sameness if you want to become a better programmer. And learning to think outside the box (and even just that there’s a box to think outside of!) is essential to that growth.
I think that’s also why so many programmers naturally gravitate to science fiction. Really good SF takes you outside your assumptions in exactly the same way.
Dave: In Lisbon this year you branched out into another kind of training – personal training in the gym. Which type of training do you enjoy more?
I’m a teacher first. It doesn’t matter whether I’m teaching computer science or Perl or Vim or presentation skills or weight training or martial arts or anything else: I just love to teach, love to watch people discover new knowledge in the world and new capacities in themselves.
That said, I’ve been a gym rat longer than I’ve been a hacker, so there’s a certain “first love” aspect to helping people improve their lifting skills and, more importantly, to develop the confidence and mental discipline needed to achieve their physical goals.
Greg: In the spirit of Desert Island Discs, we are sending you off to an island where you can only program in C, but we will allow you to take one luxury language feature from Perl. What would it be?
Definitely memory management and garbage collection. I actually quite enjoy the hands-on nature of programming in C, except for the tedious necessity to malloc, realloc, and free my storage … especially for strings.
Josette: Most importantly, will you revise your old books and write new ones? If so, when can we expect something new?
I’m just starting work on a new book…on Perl 6. I hope it will appear some time next year.
As far as second editions go, I’d certainly like to revisit “Perl Best Practices”. I’ve learnt so much more myself about good programming in the past five years. The community’s notion of “best” – and its available tools – have also developed considerably in that time. But, if I were to look at a second edition, it would certainly be a few years further down the track. There are just so many other projects and so few available tuits.
Josette: On August 11th you agreed to give a tutorial on ‘Understanding Regular Expressions’ in London. Can you tell us who would benefit from attending such a tutorial? How will it enhance their career?
Regular expressions are a tool that just about every programmer should master, and certainly one that every Perl programmer should. Regexes, especially Perl’s, are just too useful, powerful, and efficient to ignore if you have to deal with any kind of text manipulation task.
Yet Perl’s regex mechanism is so powerful that it very often bites the hand that’s trying to employ it. Many developers are actually scared of using regexes and lack any confidence that they can do so correctly or effectively.
I lay most of the blame with the way that regexes are traditionally taught or, rather, with the fact that regexes are often not taught at all…you’re just expected to pick them up as you go along.
That means that most people have a very poor grasp of what regexes actually are and no idea how to design them. The very idea of designing a regex (as you might design a program) seems odd to most people.
So that’s the focus of this forthcoming class: taking current (ab)users of regular expressions back to the fundamentals of what regexes are, how they work, and how they can be designed and optimized (rather than just cloned from existing applications and semi-randomly adapted to new tasks).
Who would benefit from that greater understanding of regexes? Every Perl programmer, because regexes are so integral to getting work done efficiently in Perl. Especially those who have to deal with, recognize, or manipulate any kind of textual or string-based data.
How will it help them? By arming them with a deep understanding of this powerful tool. By reducing their uncertainty and trepidation in deploying or maintaining regexes within their existing code. By increasing their repertoire of programming skills and techniques. In other words, by helping them to become better, more rounded, and more competent programmers.
Josette: I understand that you are travelling between April and August. As the Perl community would love to meet and listen to you, can we publish your itinerary?
Of course. It’s always at http://damian.conway.org/Events/
Gavin Bell is a key figure on the London developer scene. I first came across him when he showed me to the O’Reilly stand at Open Tech at Hammersmith, filling me in on the day’s events with his gentle Northern Irish brogue. I have seen him present half a dozen times and he’s never less than engaging, illuminating and thought-provoking. The same can be said of his book, Building Social Web Applications, which he wrote for O’Reilly. He blogs at Take One Onion and is a wearer of great shirts.
I interviewed Gavin at his offices at Nature in London, where we talked about how the book came about and who it’s for, among many other things.
Gavin Bell from oreillygmt on Vimeo.
O’Reilly’s Josette Garcia had a cosy chat with Alex Martelli. Together they discussed the past and future of Python, Martelli’s role at Google, what he misses about Italy, working with his wife and how he feels about Erlang:
Q. 1. Python 3000! What is the story behind this huge number?
A. It started as an in-joke, back at the time Windows 2000 was having serious delays — Guido joked that when we did the next major release we should name it “Python 3000” so we could be sure to not miss the target date;-). When we eventually did get started on it, the PEPs (Python Enhancement Proposals) about it were numbered starting with 3000 (to ensure no conflicts with the PEPs for Python 2.*, which are in the low hundreds). Now Guido’s “vanity” license plate (for his beautiful red Prius) is “PY3K”, which of course refers to the same famous number. (Anna and I, for our mundane silver Prius, have a wider-applying “P♥THON” — I love how the heart can be read as “Y” or “love”, since both apply, but there are no futuristic references there, alas;-).
But, as you note below, 3.0, 3.1 and so forth are how we refer to the actual code releases — the “3K” needs to recede back to a quirky in-joke, as it was all the way from the beginning.
Q. 2. I believe we are on Python 3.0 with 3.1 coming soon?
3.1 is now out (my bad, as it took me a while to answer these interview questions!) and it’s a very substantial improvement on 3.0, to the point that there’s really no reason to even consider 3.0 any more — while Python releases are normally maintained for quite a while after they’re superseded, this is NOT the case for 3.0, which was somewhat in the nature of an “experimental” release for the new Python 3 language. 3.1 is solid and suitable for production use, and WILL be maintained as previous releases were.
Q. 3. Can you tell us what the differences are between Python 2.x and Python 3?
Comparing 2.6 (which already gained most of the new 3.0 features — some of the backwards-incompatible changes listed below can be had in 2.6 with `from __future__ import ...`) with 3.0, there are several key differences (and a host of minor ones); a few of them are sketched in code below:
– print is now a function (not a statement), so `print` is not a keyword any more, and many options have nicer syntax — `print(x, file=y)` instead of the old syntax `print >>y, x`, for example
– legacy classes are finally gone; all classes are what since 2.2 we have been calling “new-style” ones (you should never use legacy classes anyway unless your code somehow needs to support Python 2.1 or earlier…!)
– “strings” are now Unicode, like in Java and C# (there are specific new types for strings of binary bytes, mutable and not), rather than the 2.* str/unicode distinction; so, for example, opening a file in text vs binary mode (always important on Windows, but not on Unix-like systems) is now crucial everywhere (`.read()` will return different types — text strings [in Unicode] from a file opened in text mode, byte strings from a file opened in binary mode…!)
– lots of methods and functions that used to return lists now return views or iterators (there’s usually no need to materialize the list; call `list(whatever)` in the unusual case where you do need an immediately materialized list) and the special ways to ask for iterators (xrange vs range, somedict.iteritems vs somedict.items) have blessedly disappeared
– comparison semantics are much simpler and sharper — the strange behavior of 2.*, whereby e.g. all ints were less than all strings, is gone; now `1 < "foo"` raises an exception — cmp and __cmp__ are gone, as is the old cmp= argument to `sort`/`sorted` (use key= instead, it’s faster on 2.6 too!-)
– the int/long distinction is gone — all `int`s are now unbounded (as longs, only, used to be in Python 2.* — we’ve been slowly and gradually “fading” the distinction for quite a few releases, but it’s finally totally gone!).
– 1/2 now returns 0.5, NOT 0 (“true division”); use // for truncating division (same as `from __future__ import division` in 2.*)
– many small syntax improvements: identifiers can include non-ASCII letters, annotations are allowed (and preserved, but not otherwise used by Python itself) for function arguments and return values, dict comprehensions, set literals and comprehensions, new literal syntax for octal and binary ints (and the new `bytes` type), keywords allowed in `class` statements, new syntax in `def` to specify keyword-only arguments, new keyword `nonlocal`, extended unpacking (`a, *b, c = someiterable`), improved syntax for raising and catching exceptions, many syntax simplifications (e.g. `<>` as a synonym of `!=` has disappeared, backticks as a synonym for repr have disappeared, …).
– many built-ins were removed: reload, reduce, coerce, cmp, callable, apply…
3.1 adds many new (typically relatively small) features, roughly the typical “size” of “delta” in a minor release (2.6 doesn’t have those because it was released together with 3.0; no doubt 2.7, once it arrives together with 3.2, will again incorporate as many new features as can be had in ways that are both backwards and forwards compatible).
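To make a few of those differences concrete, here is a small illustrative sketch of my own (not from the interview); it runs under Python 3, with the older Python 2.6 spellings noted in the comments:

    import sys

    # print is a function (2.6: print >>sys.stderr, "hello")
    print("hello", file=sys.stderr)

    # text-mode files yield str (Unicode); binary-mode files yield bytes
    with open("example.txt", "w", encoding="utf-8") as f:
        f.write("héllo\n")
    with open("example.txt", "rb") as f:
        raw = f.read()              # bytes, not str

    # comparisons are stricter: mixing ints and strings raises TypeError
    try:
        1 < "foo"
    except TypeError:
        print("ints and strings no longer compare")

    # sorting takes key= rather than cmp=
    words = sorted(["banana", "Apple", "cherry"], key=str.lower)

    # true division vs truncating (floor) division
    assert 1 / 2 == 0.5
    assert 1 // 2 == 0

    # extended unpacking
    first, *middle, last = range(5)   # first == 0, middle == [1, 2, 3], last == 4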
Q. 4. I previously heard that Python 3 is not backward compatible. Is it true, and what does it mean for the people who are using Python 2.x?
The whole point of making Python 3, rather than 2.(N+1), was the ability to break backwards compatibility and finally remove things that we long thought should be removed — redundancies such as <> equivalent to !=, old stuff like legacy classes, the apply built-in, coercion, int/long distinction, …
Users of Python 2.x for x<6 should first migrate to 2.6 (no harder than any other minor-version migration) to gain most of the new features and other “migration to 3” helpers. Removing deadwood like apply, `<>`, etc., should have been done quite a while ago, but now is better than never ;-).
Python 2.6 has a new switch `-3` that warns about likely incompatibilities, and there’s a `2to3` source-to-source translator to turn warning-free Python 2.6 source into working Python 3 source — as far as it can; but with a good suite of unit tests for the application, that should be a pretty painless migration, all in all. (If you DON’T have a good suite of unit tests, forget ALL other tasks until you have developed one — seriously, I mean it: code NOT covered by good unit tests is a disaster waiting to happen… actually, on second thoughts, no need for any waiting !-).
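As a rough, hypothetical illustration of the kind of mechanical rewrite 2to3 performs (my example, not Alex’s; you would typically run something like `2to3 -w yourmodule.py` to rewrite a file in place), the original 2.6 spellings are shown in the comments and the translated, runnable Python 3 code follows:

    # Python 2.6 original (what you would feed to 2to3):
    #     import sys
    #     d = {"a": 1, "b": 2}
    #     for k, v in d.iteritems():
    #         print k, v
    #     try:
    #         risky()
    #     except ValueError, e:
    #         print >>sys.stderr, e
    #
    # Roughly what 2to3 produces (risky() is defined here only so the sketch runs standalone):
    import sys

    def risky():
        raise ValueError("boom")

    d = {"a": 1, "b": 2}
    for k, v in d.items():          # iteritems() becomes items()
        print(k, v)                 # print statement becomes the print() function
    try:
        risky()
    except ValueError as e:         # "except E, e" becomes "except E as e"
        print(e, file=sys.stderr)   # print >>sys.stderr becomes print(..., file=...)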
The one factor that’s likely to slow down application migration, now that Python 3.1 is a solid production-worthy release, is that Python 3 is likely to be missing for quite a while some of the huge number of extensions available for 2.* — porting C-coded extensions has none of the nice helpers that porting Python sources does. If your extensions are coded in Cython, they can support Python 3 as well as 2.* even today — but I think other popular extension-writing frameworks (SWIG, SIP, Boost::Python, …) do not yet support Python 3, so all extensions written using those will have to wait for their particular framework to gain Python 3 support; extensions coded in pure C will need work on their authors’ or maintainers’ part.
Q. 5. At least two of the best-known Python people work for Google. What does it mean for Google, and what does it mean for Python?
Oh, way more than just two — Google obviously has an intense interest in Python (just as it does in other languages widely used by Google, such as C++, Java, and Javascript). But that has no limiting implications for Python, any more than it does for C++, Javascript, or Java — they’re all languages widely used and supported throughout the industry, after all!
This is an important point! Even when Googlers are developing key new Python things like Unladen Swallow (http://code.google.com/p/unladen-swallow/) [[and please note, as an important aside, that none of the owners, committers and contributors of that absolutely crucial project are those you were probably thinking of as “at least two” above ;-) ]], this is done _in Open Source_ — the result of such efforts is NOT secret, Google-proprietary technology, but rather it’s made and kept available for the whole community’s benefit.
[[Note that one of the committers, Fredrik Lundh, a Googler who was also in Florence [presenting Unladen Swallow at Pycon Italia Tre], is a really major Python figure, involved in it since well before I fell in love with it — also the first Pythonista to be ever honored by being named a “bot”, “effbot” in his case — I was third, as “martellibot”, with Tim Peters, “timbot”, chronologically in second place.]]
Q. 6. Your job at Google is described as “Über Technical Lead”. What does this involve and what languages and applications does it cover?
I actually switched almost a year ago from UTL (a highly technical but mostly managerial role) to being an individual contributor (“senior staff technical solutions engineer” if you want the whole scoop — yeah, nowhere near as cool a job title, it’s just too verbose !-) — a parallel career step. It’s great to work at a company that makes this easy, rather than one where management is a kind of “trap” from which you just can’t go back to being an IC ;-).
The languages are basically the same: “Python where I can, C++ where I must”, but also Javascript when _that_ is what I must use, to run client-side in the browser, of course !-) — more about this below.
As it happens, I’ve also switched application areas: earlier it was software for cluster management, now it’s software for business intelligence. Some people do a double-take when they think of such a drastic change, but hey, the languages, development methodologies, tools, and most other supporting technologies are the same — the maths are very similar, statistics, machine learning, data mining, Bayesian logic, etc etc — plus, I’ve always been more of a generalist than a specialist anyway :-).
To put it in perhaps more sound-bitey form: I was mostly “building the cloud” (helping build some of the SW tools to control it, administer it, keep a watchful eye on it, keep it in good shape, etc, etc), now I’m helping build the SW tools to keep an eye on how well the cloud (and the apps running on it) perform business-wise, optimize that performance, and so forth. Cloud Computing and Business Intelligence are two key application areas for software today (of course, there are many other important ones!), and I’m pretty happy about playing in these areas (and kind of proud about the role that Python plays there, and how important Bayesian logic is in both, etc, etc).
Q. 7. What is the market share of Python?
Essentially, your guess is as good as mine: there are no reliable statistics. The numbers no doubt vary a lot depending on what application area you’re talking about, of course. For example, I’ve seen a site trying to guess at the technologies used in publicly accessible web sites, based on various hints and artifacts, and Python appears to be the #2 language for that specific purpose — but that’s _way_ below PHP, with PHP at 60% or so and Python in the teens (of the sites using Python, 80% or so appear to be using Django) — still above a host of famous others, including Ruby, Perl, ASP, Java, etc. But that’s for publicly accessible websites: who can guess what the proportion is on enterprises’ internal intranets? If I had to guess I’d say that on such intranets there’s going to be a lot more Java and Microsoft technologies and a lot less PHP (after all, the IT department may have to approve _those_ internal sites), but how would Python’s share vary? No idea.
TIOBE currently has Python at #7 with 4.4% of the market — among dynamic languages, that’s way below PHP’s #4 and 9.3%, but above all others (Perl being down to #8 with 4.2%); but, as you can see from their diagrams, the month by month results are quite noisy — the trends are more stable, with Java around 20%, C around 16%, C++/PHP/VB around 10% each, and Python just below (though by a monthly glitch C# seems to have just passed it this month ;-).
For a brief summary of the issues involved in the near-impossible feat of estimating languages’ market shares, and a short list of URLs of people still attempting this near-impossible feat of estimation (many, alas, very old — I guess most have stopped even trying !-), see e.g. the Wikipedia entry at http://en.wikipedia.org/wiki/Measuring_programming_language_popularity.
Q. 8. When does one use Python and when is it best to use a different language?
As I mentioned, my own rule of thumb is “Python when I can, C++ when I must” (and Javascript when I must run in the user’s browser). The “running in the user’s browser” case, I hope, is pretty easy to understand: Javascript is the only good choice in that case (and for other platforms that only support it, of course, such as Palm Pre and the future Google Chrome OS) — when feasible, I pair Javascript with a strong framework, preferably Dojo (which was in fact largely inspired by Python in many aspects — maybe that’s why I like it !-)
C++ (or C when one needs to respect that constraint, e.g. when working in the Linux or BSD kernels, or on existing open source code that sticks with C, such as the CPython interpreter itself) gives you full responsibility and full control over your memory consumption: when every bit counts, it would be wrong to rely on a garbage-collected language (most modern languages except C++ and C are garbage-collected). If you can possibly afford garbage collection, then by all means avoid non-GC languages — if you can’t possibly afford it, though, then non-GC languages are the only game in town, obviously.
I can’t think of a “normal” case where I, personally, would want to use Java, C#, or other JVM or .NET languages, since Jython and IronPython are so good in those environments.
There will always be interesting exceptions, though: for example, Android uses Java but not the JVM, so the language-choice situation is different for that platform; iPhone native apps (if Javascript isn’t enough) must be in Objective-C; if you write stored procedures to live inside a database, you’ll probably use PL/SQL (or whatever other language your chosen DB engine supports for that purpose); for easy access to rich statistical functionality, R is hard to beat (fortunately Python interfaces well with it).
Functional programming languages are a whole different kettle of fish, and I wouldn’t mind an opportunity one day to write a really challenging application in a FP language, preferably Haskell — so far, though, I’ve been sniffing at them for 25 years, but never really had an occasion to use one in a production environment. Erlang, which you mention later, may be the closest a FP language has come to the mainstream so far (I’m not counting special-purpose languages such as XPath/XSLT), showing that FP languages may have a chance if they can join to their intellectual fascination some real-world, down-to-earth practical advantages (probably in the realm of concurrency, as Erlang has — Haskell for “software transactional memory” approaches might get there too).
Q. 9. What is the relationship between Erlang and Python? (see “Erlang + Python, l’unione di due mondi” [“the union of two worlds”] by Lawrence Oluyede)
No relationship, really, despite Lawrence’s excellent attempts: Erlang is its own world, very interesting for its scalability and robustness in highly concurrent workloads (showing its long background as a language born squarely inside the telecom industry, I guess). However, Erlang’s clumsy syntax and some of its approaches (strings as lists of characters — like, alas, Haskell) have very little reason for being the way they are, except, of course, history. There is clearly a lot of mutual interest — 1.8 million search hits for the two words together, almost half of the hits for Erlang alone and more than for, say, Python and Fortran together! — and I recommend e.g. http://muharem.wordpress.com/2007/07/31/erlang-vs-stackless-python-a-first-benchmark/, Muharem Hrnjadovic’s benchmarking of them (specifically using Stackless Python) from a couple years ago, and Reia, http://wiki.reia-lang.org/wiki/Reia_Programming_Language, an attempt to build a Pythonic syntax (and some Pythonic semantics) on top of the Erlang virtual machine (BEAM).
Q. 10. In Florence, you said that you spent 365 days speaking English and now you need to wash your mouth in the Arno – do you miss Italy?
Well, 365 days/year speaking English was an overbid on my part if that’s exactly what I said (as long as I do manage to spend a couple weeks a year in Italy, it’s more like 350 days/year of English !-)
“Lavare i panni nell’Arno” (your clothes, not your mouth, and I’m 100% sure I couldn’t misquote or mistranslate THAT one!-) was Alessandro Manzoni’s expression (he was our greatest novelist according to the most widespread opinion) as to why he needed to rewrite his masterpiece, cleansing it of northern-Italian semi-dialectal influences in favor of a purer “Tuscan” kind of Italian. So he did, and the result is indeed an awesome book, though I may doubt how important the exact choice of slightly dialectal inflection may have been (other superb Italian writers – Calvino, Gadda, Pirandello, Fo, Tombari, Verga, … – wrote in very-recognizably non-Tuscan inflections, after all !-). I guess that in Manzoni’s time, with Italy just having been reunited into a single state, promoting a single “Italian” image was overwhelmingly important.
There are a lot of things I miss about Italy, though California has its own charms. What I miss most of all, probably, is our incredibly sweet language. Those who know me might be surprised that other things don’t rank higher: what about the food, the wine, the Alps…?
Amazingly enough, I find the Sierras, the Cascades range, and the Rockies to be almost as awesome and breathtaking as the Alps — sure, Cervino at dawn is one sight you’ll never forget, but Shasta at sunset isn’t ALL that far behind !-)
Availability of superb Italian wines and food (at very reasonable prices) is surprisingly good in this part of California, too (especially since Anna, in her brief residence in Italy, developed a great skill in Italian cooking, and she hones it regularly with US Food Network superstar chefs such as Mario Batali and Giada De Laurentiis — Italians, of course !-) — our favorite grocery store, within easy walking distance of home, is Piazza’s, a family-run grocery currently owned by second-generation Americans but founded by their granddad, an immigrant from Italy — great selection of wines and foods of all kinds, but Italian ones in particular; and sometimes we shop at Ferrari’s (that one was founded by an emigrant from very close to my hometown, just like the famous car company of the same name ;-).
Donato Scotti just opened a new and absolutely delightful place in Redwood City, about 10 miles from my home, and, besides the great Italian, esp. northern-Italian, cooking that made him famous at La Strada in Palo Alto – a place which still runs just fine – that’s finally a place to get real Italian “aperitivi” (mildly alcoholic pre-dinner drinks and lots of yummy fingerfood munchies), which I _had_ been missing a bit.
I get excellent opera (most of it Italian) in San Jose and at the local “grassroots” West Bay Opera in Palo Alto and Mountain View; I can get really decent gelato (if I ever miss it compared with American ice cream, which, however, has become awesome in its own way at places such as Rick’s in Palo Alto — next door to Piazza’s, as it happens ;-) also within easy walking distance of home; coffee from north-eastern Italy (I’m from the north-east) dominates all around (including the free espresso machines at work ;-); YouTube gives me more access to my favorite Italian singers (Guccini, Ligabue, Branduardi, Battiato, …) than I ever had back when I was living in Italy… !-)
When I get a day in Bologna, I splurge on “Pane Comune” (the “common bread” of Bologna, impossible to find anywhere in Italy outside of a 20-mile radius or so from my hometown), “ragnini” (another kind of bread, dry, thin, vaguely grissini-like), and “tagliatelle al ragù” (one pasta sauce that’s almost impossible to do right anywhere BUT in Bologna — so this isn’t so much about Italy as about specifically my hometown !-) — but even in Florence I can’t get _those_, so… ;-)
Q. 11. I believe you are writing a new edition of Python in a Nutshell. What is the publishing process?
Anna and I are writing “Python 3 in a Nutshell” (that’s a new book, not a new edition of the existing “Python in a Nutshell”; the latter may be warranted, in the future, to cover newer Python 2.* versions since the 2.4 covered in the 2nd ed of “Python in a Nutshell”), to be published at first as a “rough cut”, an ebook-only edition; we hope to make the “rough cut” by Christmas (though Python 3.1 threw a bit of a spanner in the works;-), eventually leading to a paper edition next year. We’re using DocBook XML, and XML-Mind as our main editor (with a few plugins from O’Reilly), though XML is easy to analyze and process via Python scripts when we want to check something in particular, of course;-); and hg (Mercurial) to keep track of revisions, as opposed to svn, which we used in the past (hg, like other DVCSs, is vastly superior if you ever find yourself writing/developing on, say, an isolated laptop with no net access: you can keep committing at key “worth saving” points, rather than having to wait until you have connectivity again — I’m told that git and bazaar are just as good [better, their proponents say;-)] but hg’s the one I’ve been trying and it makes me very happy — it’s also what Python is switching to [from svn], and what code.google.com hosting now supports [as an alternative to svn, which we still do support, of course]).
Q. 12. You have co-authored books with your wife, Anna Martelli Ravenscroft. Does this put a strain on marital harmony?
Absolutely not — for the right kind of marriage. Remember we read the Zen of Python as part of our wedding readings 5 years ago — we DO believe it has great ideas that would help any marriage. Beautiful is better than ugly, simple is better than complex, explicit is better than implicit, and so on.
We fought epic, no-holds-barred battles over the wording of certain paragraphs, the structure of some chapters, the order in which to present a few of the many sets of examples and recipes — and, as a result, we both emerged loving each other even more deeply, AND the book came out much better than if we had been practicing the normal courteous restraint and compromise that co-authors generally have to practice with each other.
The pre-reqs for such an approach, and for such wonderful results, are that both spouses/co-authors must be in love with the English language (in both cases, a love dating from far before we ever met), in love with the subject matter (Python, in our case), AND much more interested in having the “true” or “most effective” solution and approach emerge than in “being right”; being in love with each other, with enormous mutual respect, also helps, as does having subtly different focuses (very oriented to the user/reader, for her; very oriented to the “plumbing”, the inner workings of the technology and its logic, for me) — differences that are synergistic and complementary, not inimical.
We’ve had the opportunity to chat with other married couples who had engaged in similarly successful endeavors (always technical books, simply because that’s the field we’re both in and so these are the kinds of people we tend to meet, but on a very wide spread of subjects!), and most of these aspects do appear to generalize.
Q. 13. Python continues to attract more and more interest from programmers skilled in other languages. Have you any learning recommendations for people moving to Python?
My top recommendation is to consider that you probably don’t have to MOVE to Python, in many cases: Python can probably play nicely with the languages you used to prefer (C++, Java, Objective C, C#, Fortran, …), so you can “have your cake and eat it too” — keep using the frameworks, libraries and tools you know and love, use Python to keep it all together and flowing smoothly towards your applications’ goals. This is probably not the case if you’re coming from Perl, Tcl, Scheme, PHP, Lua, or Ruby — these are languages whose favorite niches vastly overlap with Python’s own, so in this case a clean break, “a move” as you put it, may in fact be preferable.
The second-from-the-top recommendation is to bend over backwards to NOT program in Python “as if” you were programming in your previous favorite language — this applies to ANY case where a programmer is adding a new language to their quiver, but more so when the new language is especially powerful and flexible: I’ve seen people “do Cobol in Java”, “do Fortran in C++”, “do C in Perl”, etc, but “do X in Python” is scarily widespread for many values of X. The more different languages you’re skilled with, and the more idiomatically you’ve learned to employ each of them, the smaller this particular risk becomes — but especially if Python is just your second language, *beware* of striving to use it “just as if it was” PHP, or Perl, or Java, or… it _isn’t_ — books like Python in a Nutshell and Python Cookbook will help you pick up the idioms and their reason for being, so you can internalize them and use them properly.
The third-from-the-top recommendation is kind of the counterpoint to the second one: don’t imagine you have to forget all you’ve learned so far — at a sufficiently high level of abstraction, many of your existing best practices will stand you in good stead! Most design patterns are still very useful in Python (though some are superseded by the language’s built-in facilities, that’s the exception, not the rule) — watch my YouTube videos on design patterns in Python to get an idea. For example, just because you CAN (in an emergency) “monkey-patch” doesn’t mean you SHOULD rely on that troublesome technique where you can possibly avoid it — Dependency Injection still has its extremely important role, for both testability and extendability (see the small sketch below). Your best practices in overall system architecture, testing, and development methodologies are still just as important — spec-driven design, merciless refactoring, continuous build, pair programming if that’s what you like, good release engineering, security audits, load-stress tests, etc, etc.
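To make that contrast concrete, here is a minimal, hypothetical sketch of my own (not Alex’s) of dependency injection versus monkey-patching for testability; the class and names are invented:

    import time

    # Monkey-patching is possible in Python, but it mutates global state:
    #     import billing                                # hypothetical module
    #     billing.time.time = lambda: 1234567890.0      # fragile, affects everyone
    #
    # Dependency injection passes the collaborator in explicitly instead.

    class InvoiceStamper:
        def __init__(self, clock=time.time):
            # the clock is injected; production code just uses the default
            self._clock = clock

        def stamp(self):
            return int(self._clock())

    # Production use:
    stamper = InvoiceStamper()
    print(stamper.stamp())

    # In a test, inject a fake clock; no global state is touched:
    fixed = InvoiceStamper(clock=lambda: 1234567890.0)
    assert fixed.stamp() == 1234567890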
In the end, programming is something _human beings_ do — Python has done its best to be the most pleasant, productive language for human beings, fitting their brains and their needs with a carefully balanced mix of flexibility and rigor, of simplicity and power, of readability and conciseness; but all practices that address human beings’ characteristics (pervasive code reviews, automated testing/building/deployment, revision control and issue tracking, obsessively regular and frequent check-backs with the intended user[s], &c) are still just as indispensable in Python as they were in ANY other language!-)
Maker Faire was the most open, buzzing, hearty, heart-warming, dazzling and jaw-dropping display of invention, innovation, creativity, geekery and genial eccentricity you could ever wish to come across. It was a hardware hacking fest of the highest order.
I was on the Make stand with my colleagues Graham Cameron, Simon Chappell, Alice Anderson and Josette Garcia. Between serving customers, we all got the chance to explore the fair, to take in the copious exhibits and marvel at the steady stream of thrilled attendees. There was a lovely scattering of old friends amid the ocean of people we didn’t know. I think I counted six people who spoke at Ignite in Leeds, five who spoke at Ignite London 2, plus I don’t know how many who had been at the Open Hardware Conference at Nesta in December.
But the heartening thing to see was the numerous parents and grandparents with kids, both boys and girls, all giddy at the idea of making something. I spoke to people who had travelled all the way from Spain and Germany just for the weekend (those of you who didn’t make it up to Newcastle from London because it was too far – shame on you! And those that did make it – wasn’t it worth it!) There were three generations of families wandering around, taking in the wonder of it all, sharing knowledge and enthusiasm in equal measure. Someone asked me if I knew whether Newcastle had a strong Maker scene going for it – looking round, we concluded that if it didn’t now, then in 5 years’ time, when all those kids with a soldering iron in their hands for the first time had grown up a bit, it certainly would have.
Alice and I bought ourselves a Maker kit each and spent more time than we could justify soldering and wire-stripping until we could call ourselves Entry Level Makers. Actually, Alice finished her musical pencil at least an hour before I completed my brain machine, so she clearly has Maker Chops. I claim I was slower because I was photo-documenting each step I took, but really, how long does it take to turn a camera on and off and shoot a few snaps? I have to conclude Alice has more aptitude for being a Maker than I do. Still, even though we were building kits someone else had put together, and even though we had adult supervision checking our work (thanks, Mitch, Jimmie, Ken), we had great fun and felt great satisfaction when our contraptions worked.
So what is it about Maker Faire that appeals to people? It’s about seeing something tangible and real that you made with your own hands, and marveling that this object which didn’t exist two hours ago is there because *you made it*. It’s about taking control of your environment in a world where we’re encouraged to accept what we’re given. It’s about discovering the possibilities of the possible, as if with each solder connection, corresponding nodes in the brain join up and show the whole panoply of things that are available to us. It’s about validation for those people who have been out of step their whole lives that the tune they are hearing in their head is beautiful, and that those hours spent knee-deep in microprocessors or code or chemicals were not a waste of time, but were a significant contribution toward building a world that is better than the one we have now, that the past was fun and interesting, and the present is exhilarating, but the future can be somewhere that is truly magical, and – more importantly than anything else – we’ll all be allowed to participate in it because the makers and hackers have given us the tools to do so.
It was impossible not to dream grand dreams at Maker Faire.
I was witness to the Maker Faire debut of Britain’s premier steampunk band, Ghost Fire. Thoroughly enjoyable they were, too, and undoubtedly the best-dressed band I’ve come across.
Thanks to every Maker there who took time to explain to me and thousands of others exactly what their contraption was, how it was built and why they thought it mattered. The gush of enthusiasm that welcomed us as we approached any given stall never waned throughout the whole weekend. Thanks ever so much to the organisers, who did a superb job putting Maker Faire on, including our own Graham Cameron. Graham took a fine array of snaps, which are up on the Maker Faire UK Flickr site. There are more of mine on the O’ReillyGMT Flickr site.
It just remains to anticipate next year’s Maker Faire. I can’t wait.
Rachel Davies is the co-author, with Liz Sedley, of Agile Coaching, published by the Pragmatic Programmers. Rachel is a director of the Agile Alliance and the Principal Agile Coach for Agile Experience.
I spoke to her at QCon about her book, about Agile Development, how Agile Coaching works and why Agile Coaches are in the business of working themselves out of a job.
If you’re in the Oxford area, Rachel is speaking at ACCU tomorrow (Friday 16th April 2010) about Understanding User Stories.
After a period of radio silence we are back on air, more resolute than ever! First, however, there is some news that we’d like to share with everyone in the Python community.
The Python Italia association has been working behind the scenes for months; then a vote took place among the members of the EuroPython organization to choose between the two candidate teams, Italy and Germany.
It is with great pleasure that we announce the confirmation of Florence as the official venue for EuroPython 2011!
Python Italia APS
Our proposal won, and the hard work of the last few months has been rewarded: Italy has been assigned the organization of the 2011 edition of EuroPython.
We’re still in the early stages, but we’ve identified a couple of possible venues for the conference, both very close to the railway station, and we’re working to secure them and to develop partnerships.
Meanwhile, rest assured that the machinery for the next PyCon Italia is well oiled and we’ve already started working on it; we’re finishing the new graphic design of the logo, and soon you’ll be able to browse the new version of the website.
In addition to the blog, you can follow us:
- by becoming a fan on Facebook
- or by following our Twitter account
If you wish to know more about the organization, you can subscribe to the newsletter.
Stay tuned for updates about Europython and the next PyCon Italia conference.
Christian Crumlish is the curator of the Yahoo! Design Pattern Library and co-author of Designing Social Interfaces. Christian spoke in London and Berlin in January 2010, when he kindly agreed to be interviewed:
CS: What was your route into technology? What were your founding interests that have proven useful over the years?
CC: I was fascinated by computers at a young age, although I had a hazy idea of how they actually worked. In fourth grade (age 9) I “designed” a combined computer and fortune-telling machine. My schools were very progressive with technology so I was able to play with a RSTS time share system on a DEC PDP 11/30 when I was still quite young, and it only got nerdlier from there.
In college I studied philosophy, focusing on language and logic and mathematics, writing a thesis on the function of metaphor, and while it was far from a vocational degree I find that it has stood me in good stead over the years.
You talk about 5 principles that developed into Designing Social Interfaces – Pave the Cowpaths, Talk Like a Person, Play Well With Others, Learn from Games, Respect the Ethical Dimension. How did you come by these principles, and how did you realise they could be expanded into a book?
Hmm, it was more the other way around. First there was a project of trying to identify and organize social design patterns. I had this taxonomy that I took around to various events, getting people to help me work on it, and it kept growing. At some point I think Havi Hoffman from Yahoo! Press suggested it might make a good addition to the classic O’Reilly interface pattern books that had come before; Mary Treseler at O’Reilly agreed, and we turned it into a book proposal.
In the meantime, I had this long extended lunchtime conversation with George Oates, one of the original designers of Flickr, to try to suss out some over-arching principles of social design, and some of those ideas (particularly the parts about talking like a person) came from that conversation. Other principles more or less presented themselves, cropping up in numerous contexts. There are actually more than five principles in the relevant book chapter but you can’t overload people in a talk!
When you wrote Designing Social Interfaces, whose work was most influential to you?
Well, beyond my co-author (because we really got intertwined in our thinking and writing styles) I’d say that I was influenced by Jenifer Tidwell (who wrote Designing Interfaces for O’Reilly), Christopher Alexander (who wrote A Pattern Language and kicked off the patterns movement), and Ward Cunningham and his collaborators at the Portland Pattern Repository. Outside of technical and instructional writing my influences come primarily from fiction (John O’Hara, Borges, Nabokov, DeLillo) and music (Robert Hunter, Elvis Costello).
How were your own ideas for design patterns and social interfaces shaped by your experience?
Almost entirely! That is to say, I stumbled into design when the Web came around so I wasn’t trained in a studio. I was influenced by the stunning graphic design environment of New York in the late sixties and seventies, but all of my thinking about design up to that point (the mid-90s) was empirical and intuitive. As I learned more about actually making interfaces and planning experiences for people I took myself back to school, so to speak, by reading whatever I could find that was recommended to me and by grilling anyone I ever met who seemed to have a clue.
When I started on the Web, it was hard to incorporate social experiences. I started a magazine with some friends and wished there was an easy way to add comments to each article, to enable readers to join the community and perhaps even contribute to the ‘zine, etc., but at the time I was literally hand-coding all the pages and I didn’t have the first idea how to add those other features.
I stayed on top of CMS evolution and blogs and wikis and community/collaborative-filtering applications always looking for the suite of tools that would someday help me recreate the sort of content-driven community I had been unable to sustain in the 90s. Along the way I learned that there was more to it than simply integrating the correct code module, and that kindling and supporting a community is a lot harder than it looks.
Over the past ten years a lot of smart people have written essays and blog posts and books and magazine articles about the elements of social design and I’ve been reading it all.
When I came to Yahoo! we were sitting on these various disconnected social properties – some homegrown, others acquired – and it occurred to me that I might make good use of my tenure as curator of the pattern library by trying to wrap my arms around the current thinking on the key elements of social design and how it differs from designing for individual users, so if anything it felt past time to get to work on this project by the time I finally had a good excuse (my job!) to do so.
Open Source is about scratching your own itch – how do Yahoo! use the Yahoo! Pattern Library?
Well, the library began as a non-heavyhanded way of circulating the best of our knowledge about how to design common elements in our interfaces, and to try to prevent the constant reinvention of the wheel that can happen in a large, loosely connected organization. It still functions that way. We also share more and more of it with the web at large, which I find tends to drive better feedback and helps position us as a contributor to the ongoing public conversation about how to make a better web.
You’re the 3rd curator of the Yahoo! Pattern Library – how does your custodianship differ from your predecessors’, and how do you see the library developing in the future?
Well, first of all Erin Malone founded and presided over the library for the first three or four years of its existence and left an indelible mark on how it unfolded. She recruited Matt Leacock, a brilliant community and game designer, to be the first curator, and he did the yeoman work to establish the fundamental set of patterns, to build the library (along with Drupal maven Chanel Wheeler) and to establish the initial processes for writing, revising, and vetting patterns.
The second curator, Bill Scott, is a legendary user interface engineer who co-wrote O’Reilly’s Designing Web Interfaces, and who played a big role in evangelizing intelligent use of rich interaction (using Ajax and other clusters of technologies) on the web. He is now running the UI team at Netflix and I still bounce ideas off him regularly. Bill also presided over the process of taking the library public, alongside its sister resource, the Yahoo! User Interface Library.
When I came on board there had been a slight gap and the pattern process had ground somewhat to a halt, so one thing I think about a lot now is succession planning and putting in place a handbook and some tools that will enable the next curator to get up to speed and manage the mechanics of publishing patterns without having to do quite so much of the command-line work and hand-editing of JSON files that I do today.
Aside from modernizing the processes and focusing on the social design patterns collection, I think my other legacy is bringing more and more of the internal library out to the public through the Yahoo! Developer Network. When I started I believe we had published 24 patterns (I should check that) and as of today we have 59 in there and plenty more coming.
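Purely by way of illustration of the hand-edited JSON records Christian mentions, here is a minimal Python sketch of how a curator might sanity-check such files before publishing. The directory, field names and allowed statuses are assumptions invented for this example, not the Yahoo! pattern library’s actual schema or tooling.

```python
import json
from pathlib import Path

# Hypothetical example only: these field names and statuses are assumptions,
# not the real schema of the Yahoo! pattern library's internal JSON files.
REQUIRED_FIELDS = {"title", "problem", "solution", "status"}
ALLOWED_STATUSES = {"draft", "internal", "published"}

def check_pattern(path):
    """Return a list of problems found in one hand-edited pattern record."""
    record = json.loads(path.read_text(encoding="utf-8"))
    problems = [f"missing field: {name}"
                for name in sorted(REQUIRED_FIELDS - record.keys())]
    if record.get("status") not in ALLOWED_STATUSES:
        problems.append("status must be one of: " + ", ".join(sorted(ALLOWED_STATUSES)))
    return problems

if __name__ == "__main__":
    # Assumes a local folder of pattern records, one JSON file per pattern.
    for path in sorted(Path("patterns").glob("*.json")):
        for problem in check_pattern(path):
            print(f"{path.name}: {problem}")
```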
Design patterns came out of the world of architecture – why weren’t design patterns adopted as a discipline by architects, and why did they become so important to technology and computing? In that respect, what could architects learn from the working practices of computer programmers?
That’s a great question. I often ask architects about this and the sense I get is that they view Alexander as a throwback to a modernist approach that is somewhat prescriptive and strict in certain ways, and perhaps a bit idealistic and unrealistic in others. They often claim that “no one uses the patterns today.” This is probably true but I’ve heard that some builders and landscape architects, for example, actually do. Another view is that Alexander was trying to demystify a profession and relate it to a language that anyone could learn, a sort of western feng shui. If this is the case, then perhaps this idea would be threatening to people who are trained and authorized to do this work? Or perhaps they simply wouldn’t believe that this is a good idea or even possible?
Alexander himself has been pretty savvy about computer technology over the years and he personally exhorted the developer community to strive for the sort of “timeless qualities” that he reaches for in his own work. I think some programmers took up this challenge and were inspired. I’ve been to some pattern language conferences and people take this stuff very seriously! There is an almost mystical bent to the way some people really try to get down to the bedrock principles and forces at work in defining these patterns.
I honestly don’t know why it took root with software except that I imagine it met some unspoken need for a kind of meta-structure to help decide when to use various techniques and to understand the consequences of those choices.
For user interface development and product design, I think patterns also help as a sort of lingua franca that communicates to software developers that design isn’t simply an intuitive, magical, artsy thing to do but that it also has principles and reasons for things and systems. Kind of a, “Hey, we have design patterns too” message. And frankly, I think if anything the social design patterns we’ve mapped out in our book and wiki probably come closest yet to the sorts of patterns Alexander identified. In a virtual way, they get at the same sort of challenge: how to create livable “spaces” in which people will relate to each other on a human scale. I mean, don’t our web sites also need “small public squares” and “dancing in the street”?
A short video from Maker Faire, Newcastle 2010:
Internet live event!
Launch and flight of a meteo probe balloon – from Codebits to Spacebits!
As I am sure you are aware, I like Codebits, the yearly meeting in Lisbon – 72 hours non-stop of overdosing on pizzas, coke/Red Bull and of course code. Talks, Workshops, Quizzes, Fun and of course CODE – it is all there!
I have now learnt that, in their spare time, some of the organizing team have gone on to Spacebits! Here’s what they say:
“We’re a group of nerds who are not afraid of making a fool of ourselves. Deeply inspired by recent amateur ballooning projects seen on the Web we have decided that, like Obama, we could do it. We’re aiming to launch a meteo probe balloon, loaded with geekish software and electronics, into near space from somewhere in Portugal.”
Several personal high-altitude balloons (HABs) have been launched across Europe and the US in recent months, some of which have hit the news. So why build yet another balloon? I am told this project is different. Not only will it be the first balloon in Portugal to go as high as 30 km, but, most importantly:
- The balloon will be sending its coordinates and sensor measurements live to the Internet (http://spacebits.eu/). You’ll be able to follow the whole trip for about 2 hours on a live, real-time web dashboard, complete with on-board instruments, an interactive map that moves accordingly and a few surprises. It’ll be a live Internet event (there’s a rough sketch of this kind of telemetry loop after the list).
- Spacebits will tweet its status and coordinates to Twitter, in real time too, over at @flyspacebits.
- And much more
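For the curious, here is a minimal Python sketch of what that kind of telemetry loop might look like: posting a reading to a dashboard endpoint and formatting a tweet-length status line. The endpoint URL, payload fields and function names are assumptions made up for this illustration; the actual Spacebits software and APIs are not described in this post.

```python
import json
import time
import urllib.request

# Hypothetical sketch only: the endpoint and payload fields are assumptions,
# not the real Spacebits dashboard API.
DASHBOARD_URL = "http://spacebits.eu/api/telemetry"  # assumed endpoint

def post_reading(lat, lon, altitude_m, temperature_c):
    """Send one telemetry reading to the (assumed) live dashboard."""
    payload = json.dumps({
        "lat": lat,
        "lon": lon,
        "altitude_m": altitude_m,
        "temperature_c": temperature_c,
        "timestamp": int(time.time()),
    }).encode("utf-8")
    request = urllib.request.Request(
        DASHBOARD_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request, timeout=10)

def status_line(lat, lon, altitude_m):
    """Format a short status like the ones @flyspacebits might tweet."""
    return f"Altitude {altitude_m} m at {lat:.4f},{lon:.4f} #spacebits"[:140]
```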
Scheduled launch: 30th May, 2010 at Castro Verde, Portugal.
Check Spacebits’ presence on Facebook, and watch the video of the test flight: http://videos.sapo.pt/M9LyzEryEhjP5dkrd1QF