
On a Feeling of Ownership, and How it Changes With Size

Through the years, a fair number of my projects have been with small companies and startups. One thing I notice almost universally is that members of smaller teams tend to have a hunger for success and a real feeling of ownership over their product. They take personal pride in it and consider themselves invested in the project.

That tends to change as the team grows, for better and worse. On one hand, relying on “heroes” can be a dangerous thing, and people need to learn to pace themselves, so a maturing and growing team can be a step in the right direction. On the other hand, it’s hard to watch that desire for success and personal sense of pride in a product die off.

Everyone who has seen a big struggling project knows that feeling. “The wheels of bureaucracy are turning, and all I can do is what the product owner has asked of my piece. Suggesting improvements is just going to cause an argument. And besides, why stick my neck out and risk being listed as one of the causes of project failure?” There is also a big difference between the amount of responsibility you feel as part of a successful team of 10 versus a successful team of 100 or more. And projects of that size face unrealistic expectations and rarely feel successful.

How do you change this?

Maybe you don’t. I was once part of a startup, the third man in. There was the founder (with a marketing background), a designer / developer, and I joined to help with the rest of the development. The feeling of pressure and control that we all had on that project is something I’ve almost never felt on larger teams.

However, there’s no need to throw in the proverbial towel. There are techniques to strike a balance. Agile methodologies usually push for smaller teams and more distributed responsibility. And products can be broken into smaller pieces using a modular or SOA approach. (Yes, fitting SOA into an agile approach is a challenge, and possibly a contradiction.)

In the context of your business and its goals, you have to find the balance. All aspects of the business can be controlled and dictated from the top, but then your subordinates aren’t going to innovate and grow organically, and they are only going to work extra hours if you force them. For a mature business with stable returns, that model probably fits. But in industries that face a lot of change and require innovation, you probably need to risk the destabilization of delegating large amounts of responsibility to small teams. You will get less consistent results, but they will hit some home runs.

I think Microsoft has seen this in recent years. It’s a common complaint that teams there don’t coordinate and share information, but they have developed some interesting products. There have been more surprises than there were under the old centralized management of the Gates days. Look at products like ASP.Net MVC, Windows Phone 7, Windows 8, WebMatrix and Azure. They aren’t the most integrated products, and some are not commercially successful yet (WP7), but they were more surprising and interesting than a lot of prior products.

The real outlier is Apple. Steve Jobs flew in the face of everything I just described. He notoriously kept the decision-making power to himself, yet left Apple workers with a feeling of investment and devotion to the company. Even the customers feel invested. I can’t imagine a Windows sticker on the car of a person whose pay isn’t from or closely tied to Microsoft, but I see a lot of cars with Apple stickers in the back window.

So as a business grows, you can either try to keep the teams small and empower them to feel invested, or find the next Steve Jobs. Best of luck if you choose the former. Sometimes growth itself is the mistake. There are industries (like service / partner type models) where growth can risk lowering the profit share per owner / partner / investor. See Managing the Professional Service Firm for more on that risk.

What do you think? What has your experience been? Comment below.

New Questions the Build Conference Causes

Build answered a lot of questions for developers who wondered about the future of the Microsoft platform. WinRT is the new API for “Metro” applications, which are streamlined to enable both tablet and PC inputs. These applications can be built in .Net or native C++ using XAML, or in JavaScript with HTML5. The traditional desktop interface is still available, and it still runs all sorts of full .Net and Win32 applications.

But I’m left with a lot of new questions…

Silverlight is dead but not dead

Silverlight is alive on the desktop version of Windows 8. And it’s easily ported to Metro. But it seems like a big deal to me that the Metro version of IE won’t support Silverlight or Flash.

XNA is dead but not dead

XNA is not supported for Metro games. Given that there is no Xbox touch screen, maybe the point is that the input libraries could never be made compatible with Metro. Still, that seems like a big deal. C++/DirectX works on the desktop, on Metro, and for the Xbox 360.

There is no legacy layer for ARM, but is there a desktop mode?

Will the desktop mode even run on ARM? It’s known that x86 binaries won’t run on ARM, but will the desktop be recompiled for ARM? Because if not, then all the paradigms not supported in Metro (Silverlight, XNA, WPF and more) can’t be recompiled to run on the ARM version. If you have an ARM tablet, you can’t use a Flash or Silverlight site, even if docked. The iPad has that restriction, but keep in mind the iPad was never marketed as a tablet / dockable laptop the way these new Windows 8 machines are. In other words, are desktop applications now “fat binaries”? Which leads to…

You can write Windows desktop applications in almost any language; what about Metro apps?

Did they also just close the door on people who write in Java, Python, Ruby, etc.? There are Windows bindings and even portable UI toolkits (think GTK+) that run on Windows and let developers in a variety of other languages write Windows apps. Are they now only able to write desktop-mode apps for consumers who have x86-based machines? Or will Microsoft help foster a community of bindings to WinRT for a variety of languages?

Speaking of which, did anyone notice that F# was not on that diagram as an option for Metro apps? They didn’t say .Net, they said C# and VB.Net.

There was a lot of exciting news, and it’s generally a good direction. But they definitely shook up the developer community and didn’t answer everything. What do you think?

Understanding CPUs and the Business of CPUs Better

I’ve been reading Jon Stokes’ Inside the Machine, and it’s a very good read. In particular I was struck by a couple of simple aspects of how CPUs work.

ISA

First, let’s discuss ISAs (instruction set architectures). x86 is a famous one created by Intel. POWER is an ISA created by IBM. PowerPC was created by IBM, Motorola and Apple. ISAs may evolve, but they stay relatively consistent (usually backwards compatible) as new CPU designs that use that ISA are created. For developers, think of the ISA as the API (application programming interface) of the chip, because the implementation can vary drastically. For example, many x86 processors take the complicated instructions x86 allows for and execute them as a series of RISC-like sub-instructions. As a programmer of any language (including assembly), you only care that the ISA is still the same, not how the work is done.

ISAs are disconnected from manufacturers. They can be licensed. While Intel comes to mind when you think of x86, AMD produces x86 chips as well. ARM chips are licensed and produced by all kinds of manufacturers, including Qualcomm, Apple and more.

So what happens when a device or platform manufacturer changes processors? Let’s give a couple of examples. In an environment where backwards compatibility is paramount, it’s very hard to change ISAs. Microsoft has yet to do it, although the upcoming Windows 8 will support ARM. They will deal with the issue that Apple dealt with when the Macintosh line switched from PowerPC to x86/x64 chips. Apple had to provide a software compatibility layer (named Rosetta). Appropriately named, it translated the low-level language of PowerPC instructions into x86/x64 instructions. Eventually, Apple made its development tools optionally support “Universal” binaries, so-called “fat binaries” because they contain the instructions for both ISAs, and the build of the operating system for each ISA knows how to select the correct portion of the binary for itself.

Microsoft appears to be taking the simpler route of not providing translation for legacy applications to run on ARM. Still, its toolchain going forward will have to provide builds for both ISAs. Presumably, with some foresight, the installer could contain both binaries and install only the correct one. This would be valuable considering the ARM devices are presumably tablets where space considerations still matter. Regardless of the path forward, developers will need to recompile all code, including 3rd party or shared libraries, to support the migration.
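
To make that recompilation point concrete, here is a minimal C sketch of my own (not from any vendor toolchain): the same source file is compiled once per target ISA, and predefined compiler macros such as __x86_64__ or __arm__ (GCC/Clang style, with my best guess at the MSVC equivalents alongside) reveal which target a given build was made for. A “universal” or “fat” binary is simply the per-ISA outputs packaged together.

/* Minimal sketch: the same C source is compiled once per ISA.
   Predefined compiler macros reveal the target; a "fat" binary just
   bundles the output of each compile and lets the OS pick. */
#include <stdio.h>

int main(void)
{
#if defined(__x86_64__) || defined(_M_X64)
    puts("built for the x86-64 ISA");
#elif defined(__i386__) || defined(_M_IX86)
    puts("built for the 32-bit x86 ISA");
#elif defined(__arm__) || defined(_M_ARM)
    puts("built for the ARM ISA");
#elif defined(__powerpc__) || defined(__ppc__)
    puts("built for the PowerPC ISA");
#else
    puts("built for some other ISA");
#endif
    return 0;
}

On the Mac, the Universal binary tooling essentially does this per-architecture compile and then glues the results together; the point for Windows on ARM is that every library in the dependency chain needs the same per-ISA rebuild.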

What about a more controlled platform, like a game console? For example, the original Sony PSP used a MIPS chip, while the PS Vita uses an ARM chip. In this case, the clear line between product generations makes the transition easier. Any code has to be recompiled, just like before, but that is an expected cost among software makers for consumer devices. New devices come with new high-level APIs and operating system calls, and that is the real adjustment for a programmer building software for them. If Sony does choose to support downloadable PSP games on the Vita, it will be on them to provide the compatibility layer.

Microarchitecture and Processor Lines

Now that we understand that the ISA doesn’t dictate implementation, it’s worth explaining that the actual implementation is called a microarchitecture. Changes in microarchitecture do not change the ISA. So, for a counter-example, when the x86 ISA got MMX extensions, those resulted in new instructions. That is not a microarchitecture change, but an ISA change. The chip can execute the new required instructions any way it sees fit; MMX just means it handles those instructions. An example of a microarchitecture change is when Intel’s microarchitectures started using out-of-order execution of instructions to improve the efficiency of instruction loading (and reduce pipeline bubbles, but that’s a longer topic for another time).
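
To put that distinction in code terms, here is a small sketch of my own. It uses SSE2 intrinsics rather than the original MMX ones (the same idea, but SSE2 is guaranteed to be available on every x86-64 compiler): the intrinsic maps to a new instruction that the ISA extension added, while how a particular chip actually executes that instruction is a microarchitecture detail you never see.

/* Sketch: an ISA extension exposes new instructions. _mm_add_epi16 compiles
   to a packed-add instruction (PADDW on 128-bit registers); whether the chip
   cracks it into micro-ops, reorders it, etc. is a microarchitecture detail. */
#include <stdio.h>
#include <emmintrin.h>  /* SSE2 intrinsics */

int main(void)
{
    __m128i a = _mm_set_epi16(1, 2, 3, 4, 5, 6, 7, 8);
    __m128i b = _mm_set_epi16(10, 20, 30, 40, 50, 60, 70, 80);
    __m128i sum = _mm_add_epi16(a, b);   /* eight 16-bit additions at once */

    short out[8];
    _mm_storeu_si128((__m128i *)out, sum);
    for (int i = 0; i < 8; i++)
        printf("%d ", out[i]);
    printf("\n");
    return 0;
}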

Microarchitecture changes can result in real performance differences. Various clever tricks like pipelining, branch prediction and more can drastically improve the throughput of a processor without affecting its clock speed. When one chip vendor seems to be in the lead in benchmarks, but the processor numbers (like clock speed and cache) are the same, it’s usually a sign that said vendor has a better microarchitecture at the moment.
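
Branch prediction is the easiest of those tricks to observe from plain software. The rough C sketch below is my own take on a well-known demo (timings will vary by chip, and an aggressively optimizing compiler can hide the effect by removing the branch): the same conditional sum runs noticeably faster on sorted data, because the branch becomes predictable.

/* Rough sketch: the same conditional sum over the same values is much
   cheaper once the data is sorted, because the branch predictor guesses
   the (now monotone) branch correctly almost every time. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 20)

static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

static double timed_sum(const int *data, long long *sum)
{
    clock_t start = clock();
    long long s = 0;
    for (int pass = 0; pass < 100; pass++)
        for (int i = 0; i < N; i++)
            if (data[i] >= 128)      /* unpredictable when data is random */
                s += data[i];
    *sum = s;
    return (double)(clock() - start) / CLOCKS_PER_SEC;
}

int main(void)
{
    int *data = malloc(N * sizeof *data);
    long long sum;
    double t;

    srand(42);
    for (int i = 0; i < N; i++)
        data[i] = rand() % 256;

    t = timed_sum(data, &sum);
    printf("unsorted: %.2fs (sum %lld)\n", t, sum);

    qsort(data, N, sizeof *data, cmp_int);
    t = timed_sum(data, &sum);
    printf("sorted:   %.2fs (sum %lld)\n", t, sum);

    free(data);
    return 0;
}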

With that in mind, it is much easier to decode the processor lines than it would first seem. Product names change a lot, but the microarchitectures stick around for a while. If you look at that info, you’ll find that the product lines using the same microarchitecture differ by cost, cache size(s), clock speed, power consumption, transistor density, etc. It helps to look through a list like this of the microarchitectures released by a company. Just be aware that some of the codenames are really just shrunk-down versions of earlier microarchitectures. You’ll see a power/heat change in that case, but it’s largely a manufacturing change.

Summary

So what does all this mean? Hopefully, when you see benchmarks, or discussions about major platform or tooling changes driven by chip changes, they will make a bit more sense. And processor shopping should be a little easier if you understand that once you zero in on a microarchitecture you prefer, you can slide up and down the cost scale a bit based on clock speed, cache size, etc. Certainly this basic understanding has emphasized to me that clock speed isn’t everything. One need only see the benchmarks of two different microarchitectures to see how big the differences can be. For example, see this comparison of an Intel Core i7 (Nehalem microarchitecture) and an AMD Phenom II (K10 microarchitecture). You can see real differences there.

And finally, as you read about various hardware configurations, you should begin to recognize where certain ISAs fit as the best tool for the job. The raw efficiency and power of IBM’s POWER ISA is the reason it still has such a stronghold in supercomputing and other big-iron applications, while ARM’s power efficiency and flexibility make it the clear leader in portable devices. Take the iPad: Apple’s A4 and A5 chips may sound like a new invention, but they are just new implementations of the ARM ISA with an on-chip GPU. And x86’s desktop software library and price / performance balance have kept it king of the desktop computer for a long time running.

Interesting speculation: with Windows 8 and OS X Lion both heading in a tablet-friendly direction, and both Windows 8 and Apple tablets running ARM, you have to wonder whether Apple and Microsoft will move away from x86/x64 in order to simplify their developer tools by getting rid of the need to compile to two different ISAs.

Source Control Considerations for ConnectionStrings in .Net

For .Net projects that have multiple developers, configuration differences can be a real problem. For this post, we’ll use the example of connection strings, but it could just as easily be a directory location or some other difference. Let’s say that we have two developers: one is using SQL Server, while the other is using SQL Server Express. A minor difference in configuration. The first developer needs to list a DataSource of “.” or “localhost”, while the second developer’s DataSource is something like “.\SqlExpress” or “localhost\SqlExpress”.

Note: for the purposes of this discussion, I’ll refer to some git terminology, as it’s my preferred source control tool, but this could just as easily apply to TFS, Subversion, Mercurial, etc.

Some developers will put the most common version in web.config, and the developers in the minority have to change their file without checking it in, then regularly undo and redo that change in order to get latest whenever there are other web.config (or app.config) changes.

To avoid this, some teams don’t check in the web.config file. They check in something like web.config.sample, which has the default settings, and each developer makes a local copy called web.config that is not registered with source control. You can make any appropriate changes without causing conflicts. Any time you get latest and see a web.config.sample change, you’ll want to use your favorite merge tool to pull any necessary changes into your web.config. Less work than the previous approach, but more manual merging.

I like to recommend a third method that uses a lesser-known piece of web.config capability: most config sections allow a configSource attribute to specify an external config file.

So in the case of the connection string differences, the web.config gets checked into source control, and its connection strings section would look like the following:

<connectionStrings configSource="connection.config"/>

The connection.config file is then treated like web.config in the second method above. There is a connection.config.sample file in source control that shows all the connection strings you need, with sample connection info, as follows:

<?xml version="1.0" encoding="utf-8" ?>
<connectionStrings>
    <add name="SomeDB"
         connectionString="Data Source=.;Initial Catalog=SomeDB;Integrated Security=True;"
         providerName="System.Data.SqlClient" />
</connectionStrings>

Additionally, we have a rake task (it could be msbuild, maven, a .bat file, etc.) that helps you initialize the project upon first checkout. One of the things it does is copy that file from connection.config.sample to connection.config. And our project’s .gitignore file tells git to ignore connection.config (a sketch is shown below). Now you only need to change your DataSource and you are ready to go. The web.config can change without the need to change your connection.config file; the only time that should change is when you add, remove, or rename a connection string.
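
For reference, the ignore entry is a one-liner. This is just a sketch matching the file name used above; your actual .gitignore will likely have more in it:

# .gitignore (sketch)
connection.config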

This same idea can be used for Elmah config, SMTP config, etc. For example, in development we have email go to a file, and different developers use different directories to store the mail. This scheme handles it; a sketch follows.
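
Here is a hedged sketch of what the SMTP version might look like. The file names are made up, and I’m assuming the standard <system.net> smtp section honors configSource the same way connectionStrings does. The web.config (checked in) points at an external file, and smtp.config.sample (also checked in) gets copied to a local smtp.config where each developer sets their own pickup directory:

<!-- in web.config (checked in) -->
<system.net>
  <mailSettings>
    <smtp configSource="smtp.config" />
  </mailSettings>
</system.net>

<!-- smtp.config.sample (checked in); copy to smtp.config locally -->
<smtp deliveryMethod="SpecifiedPickupDirectory" from="dev@example.com">
  <specifiedPickupDirectory pickupDirectoryLocation="C:\dev\mail" />
</smtp>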

Lastly, this same scheme can largely handle the differences we have between staging and production environments. Since the differences in web.config between the development branch and the stage and production branches are minimal, merging is pretty painless. And those external config files are a one-time setup on the server.

Consider this pattern for your projects, and let me know what you think. Either in the comments below or at @thoolihan. And if you find any room for improvement or have other feedback, please pass it along.

The Sun Also Rises (on the Technology Coast)

One generation passeth away, and another generation cometh: but the earth abideth for ever. The sun also ariseth, and the sun goeth down, and hasteth to his place where he arose. -Ecclesiastes 1:4-5

Apple released Mac OS X in 1999, using a BSD core and embracing many libraries and utilities of the open source world. This excited open source advocates and appeared to be a step in the right direction compared to Microsoft’s strict licensing and Sun’s Java licensing.

Here we stand today, and the Apple iPhone is one of the most locked-down computers around. Make no mistake, the iPhone is a computer. Sun has open-sourced Solaris and Java. Let’s not overstate the Microsoft moves, but they have made efforts to interoperate with open source libraries, and are releasing the source of many new products on CodePlex. Windows is still locked down, Office docs are in strict formats, and the open source stuff is under a wacky license. But it’s a start.

What will the next 10 years bring?