Tuesday, December 29, 2009

Seeing ‘Avatar’ In 2-D Would Be Like Taking Your Mom to the Prom

I mean, sure, you were there for the Big Event, but you were certainly not getting the full experience.

Avatar is the latest movie (in fact, the first in 12 years!) from James Cameron, the director of Titanic, the film that, I suspect, will hold the world record for highest box office gross in its initial run for only a short time longer, as well as big money makers like Terminator 2 and Aliens.  It is a movie that you simply must see in a darkened theater, and, unless you lack two functioning eyes, in 3-D.

Granted, with stunning high-definition screens in more and more homes all the time, it is sure to be a show-off-your-home-theater Blu-ray disc later this year, too, but to miss this one in the theaters would be a real shame.

There are only a handful of movies that have come along in the 115-year history of motion pictures that you can point to and honestly say

“That movie broke new ground.” 

Much as I was skeptical from the initial previews, I have to happily admit, Cameron really pulled off something spectacular here.

I doubt Avatar will be widely praised for its writing or acting, but it succeeds on several levels that really do change the game going forward.

First, you have to think about the fact that the film (or perhaps “the digital file” is the more appropriate term, to paraphrase Robert Rodriguez) exists at all.  With a reported production budget of ~$230 million, it is one of a very small number of cinematic projects of that budget magnitude. 

Cameron, the reigning “King of the World” in terms of tremendous commercial clout at the box office, thanks most particularly to the $1.8 billion returned on the $200 million investment in Titanic, is perhaps the only director who could get this project green-lit with a budget that high. 

The only other films to come close to that price tag, Spider-Man 3, Pirates of the Caribbean: At World’s End, and maybe those last few Star Wars movies, were proven marketing vehicles that were virtually guaranteed to put up huge box office numbers, and therefore easily offset their own budget risk.  

Think about it:
Steven Spielberg? Without Indiana Jones or E.T. in the movie?
George Lucas?  Without Star Wars or Indiana Jones?
Peter Jackson?  Without any hobbits or King Kong?
Sam Raimi?  Without Spider-Man?
Michael Bay?  Without Megan Fox, er, uh, Optimus Prime?
Gore Verbinski?  Without Jack Sparrow?
Robert Zemeckis?  Without Marty McFly and Doc Brown or Forrest Gump?

I doubt that anyone else could have easily pulled together the kind of financing this thing took. 

And, consider, too, that the film required substantial investment and invention of new technology in order to be shot at all (or, perhaps “captured” is the more appropriate term).  Cameron has a long history of being on the cutting edge of filmmaking technology and special effects, and, like Lucas, Spielberg, Jackson and Zemeckis, has a knack for consistently making such massive undertakings broadly appealing and therefore remarkably profitable.

One thing worth noting is that even though there is no pre-existing literary, TV or film property that provides Avatar with a built-in audience, it is a genre picture.  This is significant, because when you look at the movies that have ushered in new eras of technical innovation and sophistication (think of the advent of Dolby Stereo and motion control photography with the first Star Wars movie, or bullet time and wire work in The Matrix, or the rapid, coarse cutting style of the Bourne movies), they are, invariably, genre pictures.  Science Fiction in particular boasts the highest number of innovative effects films, for obvious reasons, and that is certainly the genre that Avatar is most closely aligned with, although it clearly has elements of war movies, disaster flicks, fantasy and westerns woven through it as well.

So there is the question of what Avatar’s success means for the film industry.  In its second weekend, which is when most blockbusters begin their rapid decline in box office grosses, Avatar not only held on to #1 with a very modest decline in box office receipts, but did so even as the new Sherlock Holmes film had the biggest Christmas Day opening of any movie in history, and by a wide margin at that.  These are clear signs that Avatar has legs and that the word of mouth is excellent.  Clearly, it will be among the top grossing films of all time, and therefore, it will be hugely influential.

Here are some key industry outcomes that I can easily predict:

1) Arguably, the most significant impact Avatar will have on the industry at large is that it demonstrates that 3-D can no longer be dismissed as merely a gimmick or novelty.  It can be used to enhance the story and actually help the audience suspend disbelief, rather than to flaunt the movie’s unbelievability literally right in the audience’s face. 

There are only a small handful of shots that stand out as traditional “3-D shots” in terms of the “oooh aaah” factor, but the entire viewing experience is tremendously enhanced throughout by the depth and richness with which the 3-D treats the viewer’s eye.  As the title of this post notes, seeing the film in 2-D would be the cinematic equivalent of taking your mom to the prom.  Sure, you could, but… why would you want to? 

Just a few years after they were heralded as new heights of computer-generated imagery, Star Wars Episodes I-III, The Lord of the Rings, Harry Potter, and Jurassic Park will quickly begin to feel quaint and technically obsolete.

2) Weta Digital spearheaded the special effects work on the production, following its successes with the Lord of the Rings trilogy and King Kong, among others.  With Avatar, the “little” effects shop Peter Jackson founded to create shots for Heavenly Creatures back in 1993 has firmly and undeniably cemented its position as the primary competition to George Lucas’s Industrial Light and Magic, by creating a true masterpiece of special effects directed by a major filmmaker other than Jackson himself. 

ILM will almost certainly react by trying to best Avatar in its next major outing, and it is not hard to imagine George Lucas commissioning a further effects revamp of his core Star Wars films to adapt them to 3-D viewing, to keep them from looking obsolete too quickly, the way older sci-fi movies do nowadays.

3) Lastly, as alluded to earlier in this post, with any luck the film industry will recognize that it is okay to be more daring with its investments.  For two decades now, a vastly disproportionate amount of investment has gone into movies that are retellings of pre-existing works of art, whether earlier movies, books, comics, or TV shows, and the sequels thereto.  Avatar’s box office success will hopefully remind the major studios that Hollywood can make a mint by making us crave something we have not seen before.

Don’t get me wrong.  I am not so simple-minded that I don’t see the influences that are sprinkled throughout Avatar.  In Hollywood-speak, you might call it “Dances With Wolves meets Disney’s Pocahontas, crossed with Star Wars: Episode II: Attack of the Clones, and it’s in 3-D!”  or any of a thousand other ridiculous shorthand summaries like that.  But, nevertheless, the Na’vi people, their language, the botanical life on the planet Pandora, all of these are examples of rich imagination and universe-creation that should inspire interesting work in the next several years from all levels of the entertainment industry.

To sum up: 

Go see it. 

In 3-D. 

Already saw it?  What did you think?

Monday, November 23, 2009

Don’t Let Yourself Be the Bug in Your Software

Have you ever heard the old joke about the guy who was such a bad dancer that they asked him to stop because he kept throwing off the music?  (If you know who told that joke originally, please let me know).

My wife had an interesting experience today in much the same vein.

Our children’s school system uses a web-based software system to schedule quarterly parent/teacher conferences.

For the elementary school, the scheduling is pretty straightforward, because you only have to schedule with one teacher.

For the middle school, however, it’s more complicated, because there are a half dozen or more teachers to potentially meet with for our one student. 

Because of this, conferences are limited to 10 minutes, per the rules indicated in the software system, and you can (in fact, you are encouraged to) schedule back-to-back conference slots with different teachers, such that you can, in theory, get through, say, six conferences in 60 minutes, one scheduled right after another.  My wife scheduled her times a couple of weeks ago, and had no trouble with the software.

So far, so good.

Now, given that this is scheduling software, it only stands to reason that the offline elements of the scheduling process should be aligned with the software’s scheduling rules and vice-versa.  Unfortunately, this was apparently not the case.

My wife showed up 10 minutes early to her first scheduled conference.  She was told that she was 10 minutes (a whole session, mind you) early, and asked whether she could wait until the other parent(s) showed, or didn’t show, so everything could stay on track.  Smart thinking.  Despite my wife being early, the teacher recognized the potential downstream impact of breaking with the “business rules,” so to speak, in this one case.

But, for some reason, certainly meaning no harm, the teacher said, just a moment or two later, that my wife might as well sit down and get started with the conference, so my wife obliged, even though only about half of the appointment timeslot remained. 

But, about 6 or 7 minutes into the other parents’ appointment time, the other parents actually did arrive, and the teacher then told my wife that she would have to abruptly end the conference that had only just begun, because she had to keep things on track schedule-wise. 

In effect, she, again meaning no harm, gave the other parents my wife’s slot and shortened the previous slot that my wife had “taken” at the teacher’s own request.  As a result, my wife got only about 2-3 minutes for that first conference, and didn’t have an opportunity to ask questions or discuss anything.  This is what is referred to as an unsatisfied customer.

According to my wife, bells were not used to mark the hard cutoff for the 10-minute cycles, so she simply went on to the next room on her schedule, only to find that the teacher in that session was backed up, running about a session behind schedule.

This issue cascaded through several other conference timeslots, and apparently, occurred for others in parallel, until at one point my wife happened to be passing a school administrator in the hall, who was earnestly trying to keep things flowing properly.
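To see how one overrun snowballs, here is a minimal sketch (the overrun amounts are invented for illustration): with back-to-back slots and no slack between them, every minute one conference runs over is inherited by every conference scheduled after it.

```python
def start_delays(overruns_min):
    """Given each appointment's overrun in minutes, return how many
    minutes late each appointment starts.  Delays accumulate because
    slots are scheduled back to back with no buffer between them."""
    delays, total = [], 0
    for overrun in overruns_min:
        delays.append(total)  # this appointment starts 'total' minutes late
        total += overrun      # its own overrun pushes everyone after it
    return delays

# Six 10-minute conferences; two run just 5 minutes over.
print(start_delays([0, 5, 0, 5, 0, 0]))  # [0, 0, 5, 5, 10, 10]
```

Two small overruns early in the evening leave every later conference starting a full slot late, which is exactly the pattern my wife ran into.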

The administrator, certainly meaning no harm, asked if my wife was having challenges with the schedule due to the fact that all of the teachers were running behind by 1 or 2 appointment slots by now.  My wife indicated that yes, the schedule was now off track, and the administrator, trying to help of course, said she would call the last teacher on my wife’s schedule and tell her that my wife would be on her way, but that her schedule was thrown off by earlier delays.

It is not clear whether the administrator actually reached the teacher to relay that message, but when my wife showed up, 15 minutes after the originally scheduled time, due only to the delays elsewhere, the teacher seemed completely irritated by the delay, and made it clear that she held my wife responsible for it.  The ensuing conversation was equally frosty.

My wife, to reiterate, was at the school early for her first appointment of the day, and simply tried to go from conference to conference per the schedule she had established with the website software a few weeks earlier.

So, considering my wife’s case alone, the software worked perfectly, but one appointment got cut short and thereby rendered essentially meaningless, all others got delayed, and the final one included some insulting attitude for good measure.

Now, I’m sure, at no point in this process did anyone think they were causing any harm.  And yet, major breakdowns occurred.

You can’t blame the software.  It did what it was meant to do.

You can’t blame the people for being ill-intentioned.  No one meant to cause any trouble.  Okay, maybe a few people could have been more pleasant, and/or followed through better.  But that’s just humans for you.

Regardless, there was definitely a process breakdown.

Looking at it after the fact, a few key breakdown points stand out:

1) In my wife’s case, to stick with the process as well as possible, the first teacher should have either

a) not seen my wife until her scheduled appointment time, so she could politely let the late-arriving parents know that that time was booked, thereby hopefully keeping the subsequent appointments on track

b) seen my wife, and when the other parents did arrive, flip-flopped their appointment time with my wife’s so both got their full time and the teacher’s attention

or maybe even c) simply rescheduled the other parents’ appointment for later on in the day, for example

2) The administrator, rather than trying to call individual teachers and put out mini-fires, should have addressed the root problem – in my opinion, without the use of the school’s bell system to clearly indicate that appointment times had ended, parents and teachers apparently went over their allotted time in many, many cases, which had a cascading effect on delays.

3) The final teacher, when told by my wife that her delayed arrival to that last conference was the result of the aforementioned breakdowns, should have been more understanding, but that is really a matter of opinion, frankly.  You can’t count on people to be nice or understanding, I’m afraid.

The reality is, if parents and teachers had been more successfully influenced to stick to their appointment times, aligning the real-world process more closely with the software, the delays might not have occurred, and the last teacher, despite her attitude, would have had little to complain about with respect to my wife’s arrival time at her conference.  And the user, my wife, would have had a much more pleasant experience; she’d be singing the praises of the school’s efficient method of handling conferences.

The moral of the story?  Don’t invest in software to simplify a relatively complex process, only to then be the bug in your own software by neglecting the real-world aspects of the process the software is supposed to help you streamline! 

Oh, and whatever you do, don’t piss off the end user.  Especially when she’s my wife.

This post originally appeared at http://www.becraftsblog.com

Reprints with attribution are welcome; so is feedback.

Friday, November 20, 2009

Jeff Becraft Interviewed by Microsoft Architect Evangelist Kirk Evans on Microsoft’s Channel 9!

I want to thank Kirk Evans, Architect Evangelist for Microsoft, who recently interviewed me about AT&T’s managed SharePoint hosting services.

Kirk is a very knowledgeable guy with great connections to people doing fascinating things with the technology, so if you’re working in the Microsoft space, you should definitely follow him on Twitter @kaevans, and watch for more of his “Water Cooler” shows on Channel 9.

Tuesday, November 10, 2009

Benjamin Franklin’s Thoughts On Cloud Computing

I’ve recently been reading the Autobiography of Benjamin Franklin, and have been marveling at the timelessness and broad applicability of many of Mr. Franklin’s nuggets of wisdom.

A surprising number of them I find quite applicable to Cloud computing, the application service provider model, Software as a Service, or whatever you wish to call your particular flavor and brand of server management handled off-premises by an external vendor.

As I too infrequently do, before writing this post, I actually Googled my topic “Benjamin Franklin cloud computing” just to make sure no one had already identified this connection, and sure enough someone had.  But, hopefully, this post is entertaining and insightful nonetheless.

"I was not discouraged by the seeming
magnitude of the undertaking, as I have
always thought that one man of tolerable
abilities may work great changes and
accomplish great affairs among mankind if he
first forms a good plan, and, cutting off all
amusements and other employments that would
divert his attention, make the execution of that
plan his sole study and business."

What Mr. Franklin is telling us here is that focus and experience are key factors when considering a cloud computing vendor or application service provider.  Do you want to go with a company that has decided recently to offer their services in a hastily constructed cloud computing model, or someone who has made providing services in that model “their sole study and business” for years?

"Partnerships often finish in quarrels, but I
was happy in this, that mine were all carried
on and ended amicably, owing I think a good
deal to a precaution of having very explicitly
settled in our articles everything to be done by
or expected from each partner, so that there
was nothing to dispute, which precaution I
would therefore recommend to all who enter
into partnerships."

Here Ben is pointing out that when you put your data and solutions in the cloud, you are deliberately handing off responsibilities to the vendor.  You and that vendor owe it to yourselves to make sure that you have carefully ironed out the details regarding who is accountable for what, and where the line between included services and extra costs really is.

"In the first place, I advise you to apply to
those who you know will give something; next,
to those who you are uncertain will give
anything or not, and show them the list of
those who have given. And lastly, do not
neglect those who you are sure will give
nothing, for in some of them you may be mistaken."

Even in the mid-1700s, it was clear to the Founding Father that cloud computing providers can’t effectively spring into action and achieve success throughout the market overnight.  It takes years of disciplined building and refactoring of the entire operating environment, customer service organization, and billing systems to get it all just right. 

Early adopters will naturally light the way for those who are more risk averse, and only after those two groups are satisfied with the results will the masses be swayed.  If you are choosing a cloud computing provider, it behooves you to learn how long they’ve been offering services in that model.

"Human felicity is produced not so much by
great pieces of good fortune that seldom happen,
as by little advantages that occur every day."

Ben is telling us that the key to being happy when handing over the management of your data, your security, your applications, your entire solution, to your vendor is feeling as if you are getting consistent, high value every day. 

Sure, it’s great to see your cloud computing provider swing into action and save the day when there is that rare crisis, but don’t overlook the confidence you develop day by day as the system remains stable, the patching gets done quietly and uneventfully, and the functionality and/or performance are enhanced on an ongoing basis. 

Also consider how well the provider has adapted their customer service people’s work style to suit your company’s needs, as opposed to the other way around.  What greater advantage can a provider offer you than the ability to integrate seamlessly with your own team, so you can mutually ensure that each other’s work on your behalf gets done quickly and effectively?

"How few know their own good, or knowing
it, pursue. Those who govern, having much
business on their hands, do not generally like
to take the trouble of considering and carrying
into execution, new projects. The best public
measures are therefore seldom adopted from
previous wisdom, but forced by the occasion."

Leave it to Ben to point out that cloud computing offers a tremendous advantage over traditional delivery models in that you can get to work quickly without having to first procure your own hardware, networking, data center racks, etc.  The easier your provider can make it for you to get your application up and running quickly, the easier it will be for you to get approval for that opportunistic project your team just identified.  Cloud computing becomes a valuable addition to your toolbox in this sense, because it gives you an option where previously, all you had were processes and delays.

"When men are employed they are best
contented... [As with] the sea captain whose
rule it was to keep his men constantly at work,
and when his mate once told him that they
had done everything, and there was nothing
further to employ them about, 'Oh?' says
he. 'Make them scour the anchor.'"

Let’s face it.  There is always more to do.  You are never going to have plenty of time to get it all done.  Why NOT take a big chunk of the stuff you have to do but which does not materially push your business closer to its goals, and push that off to your cloud computing provider?  That way, you can focus on what is going to make your business more successful, and let your provider worry about making the process of managing your system more efficient.

At the same time, your provider is going to be able to work night and day on carrying out those tasks in the most optimal, efficient, repeatable way possible, in order to maximize their margins and minimize the effort involved.  You can only benefit from this constant, never-ending cycle of improvement and refinement.

"Such extreme nicety as I exacted of myself
might be a kind of foppery in morals which, if
it were known, would make me ridiculous --
that a perfect character might be attended with
the inconvenience of being envied and hated,
and that a benevolent man should allow a few
faults in himself to keep his friends in countenance."

The irony of the cloud computing model is that when it’s working, it doesn’t draw attention to itself, so no one thinks about how well the provider is executing on a good day; but when it’s not working, oh boy, does it draw attention to itself, and all manner of eloquence is spoken about the provider, as if systems never hiccupped when the solution was in-house. 

Alas, while you will hit that 99.9% or higher SLA on average, the occasional fluky thing will happen that takes the system offline for a blip.  When your provider not only fixes the problem promptly, but calls to tell you what happened before you even receive any complaints from users, take that responsiveness as a blessing, rather than overly focusing on the fact that the issue occurred in the first place.  No system is perfect, but if you and your provider have planned ahead properly, these flukes will be a relatively painless reminder of how well the relationship is going both day to day and in times of crisis.
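For perspective on what that 99.9% figure actually permits, a quick back-of-the-envelope calculation (just arithmetic on the SLA percentage mentioned above and the length of the calendar period):

```python
def downtime_budget_hours(sla_percent, period_hours):
    """Hours of permitted downtime for a given availability SLA
    over a period of the given length."""
    return period_hours * (1 - sla_percent / 100.0)

# A 99.9% SLA still allows roughly 8.76 hours of downtime per year...
print(round(downtime_budget_hours(99.9, 365 * 24), 2))      # 8.76
# ...or about 43.2 minutes in a 30-day month.
print(round(downtime_budget_hours(99.9, 30 * 24) * 60, 1))  # 43.2
```

In other words, even a provider meeting its SLA will have the occasional blip, which is exactly why the response to a blip matters more than its mere occurrence.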

Wednesday, November 4, 2009

Web Part Page Bombing? Enter Web Part Maintenance Mode To Fix It

A quick tip for SharePoint customers and end users who are working with a web part page that bombs out before they can do anything to it.


Append ?contents=1 to the end of the URL of the web part page (you can right-click the hyperlink and choose Copy Shortcut to get the URL into your clipboard without loading the page and repeating the bomb-out), and load the page again.

You will enter web part maintenance mode, giving you a chance to remove the offending web part without having to kill the whole page and create it all over again from scratch.

Unfortunately, this doesn’t give you a magic bullet for resolving the underlying problem, but it does at least allow you to preserve the page as it was prior to adding the offending web part.
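If you ever need to script this for a batch of pages, the trick is just query string manipulation; contents=1 is the standard query parameter that puts a SharePoint web part page into maintenance mode, though the portal URL below is purely hypothetical:

```python
from urllib.parse import urlsplit, urlunsplit

def maintenance_url(page_url):
    """Return the web part maintenance mode URL for a page,
    appending contents=1 while preserving any existing query string."""
    parts = urlsplit(page_url)
    query = parts.query + "&contents=1" if parts.query else "contents=1"
    return urlunsplit((parts.scheme, parts.netloc, parts.path, query, parts.fragment))

# Hypothetical page URL, for illustration only:
print(maintenance_url("http://portal/sites/team/default.aspx"))
# http://portal/sites/team/default.aspx?contents=1
```

Pasting the resulting URL into the browser does the same thing as typing the parameter by hand; the function just saves you from mangling pages that already carry query strings.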

Thursday, October 29, 2009

Why don’t I see an option to delete my Site Content Type?

If you have created a Content Type in your Site Content Type Gallery and now you want to Delete it, but SharePoint is not presenting you with the Delete option when you click on the Content Type’s name in your Site Content Type Gallery, don’t worry.  You probably just set your Content Type to be “read only.”

If so, trust me, this is not a tough spot to get out of.

Once you change that Read-Only setting, the Delete option will appear for you again, and deletion is simple as long as the Content Type has not been instantiated anywhere.  (If you’ve used the Content Type already to create documents, check out Tyler Holmes’ blog post for some advice on how to take care of it)

To change the setting, do the following:

  1. Go to the Site at which the Site Gallery exists.   You must have sufficient privileges to administer the site.
  2. Click Site Actions.
  3. Click Site Settings.
  4. Under Galleries, click Site Content Types.
  5. Find your Content Type in the list, and click on its name.
  6. Note that the page header reads:

    Site Content Type Advanced Settings: Delete This Content Type (Read Only)

  7. Click the “Advanced Settings” link.
  8. In the field labeled, “Should this content type be read only?” change the selection to “No”.
  9. A pop-up may appear, indicating “This type has been marked as read only.  If you choose to modify this type, existing solutions such as client property editors or custom workflow solutions may stop working.  Are you sure you want to mark this type as modifiable?” 

    At this point, you may want to take a moment to consider whether this consequence is really a concern, or whether you just created your Content Type when or where you shouldn’t have and just want to wipe it out before anyone uses it. 

    I’m assuming the latter is true in this case.
  10. Click “OK” to get past the pop up.
  11. Presumably, you are not worried about the remaining question “Update all content types inheriting from this type?” since you already clicked “OK” above.  If you’re concerned, check Tyler’s blog (see above).
  12. Click “OK” on the page.
  13. When the page refreshes, you now have, under Settings, an option to “Delete this site content type.”  Click it.
  14. A pop-up will appear to confirm that you have stopped and thought about the consequences of your actions.  It reads, “Are you sure you want to delete this site content type?”  Click “OK” if your mind is free of fear and paranoia.
  15. When the Site Content Type Gallery reappears, your Content Type is gone.

See?  Easy.

Sunday, October 25, 2009

Leaving Las Vegas… Day 4 of SharePoint Conference 2009

“You meet people who forget you. You forget people you meet. But sometimes you meet those people you can't forget. Those are your 'friends.'” - Anonymous

Day 4 was October 22, the last day of the conference for most people, me included, and, by no mere coincidence I’m sure, the launch date for Windows 7, the newest version of the desktop operating system.

Clued in by a tweet, I flipped on the TV in my room at the Luxor to see Steve Ballmer, who on Monday had been the keynote speaker at the conference, in New York with Matt Lauer on the Today show, demo’ing some amazing new devices running Windows 7. I’m not running Windows 7 yet, but I’m due for a new computer soon… hmmm… :-)

Before shutting down my laptop one last time, out of curiosity, I checked my blog statistics, and discovered that (thanks to people like you!) traffic had increased more than 600% during the conference, and I had added over 100 new Twitter followers. I updated my personal Facebook status about this, noting how grateful I was that so many people had begun following my posts.

Then, I checked out at the Luxor, left my suitcase with the front desk, and headed to the convention center area of Mandalay Bay, intending to go to the partner sessions that were being held in parallel to the standard breakout sessions that day.

That morning, AT&T’s @BizSolutions feed tweeted that anyone at the conference interested in hearing about our Enterprise Hosting services for SharePoint could tweet me and find me at the show.

Ironically, the first person who contacted me was @sunnyjx, who works on the team that supports an internal deployment of SharePoint for AT&T’s international teams. I had never met him, but we connected on Twitter by way of the MySPC Community area. We sat down and had a fantastic conversation about our respective interests in SharePoint and our roles in the company. Before parting, we agreed to try to meet up with other AT&T’ers who were also at the show, if time permitted, later in the day.

I had a little time to kill before the next session would start, so I clicked around on my Blackberry, and discovered that a bunch of people had ReTweeted the notes I had posted live during sessions the previous couple of days, which for Twitter people is validation that the posts were well-received and found to be of value.

I also saw that a Facebook friend I had gone to high school with had seen my Facebook status update about being at the conference from earlier in the day and mentioned that he had a good friend that was at the show, too. I said I’d like to meet him, and within minutes, @alghourabi had tweeted me about meeting. Ironically, I was already following his Twitter feed, but had not met him before. Next thing I knew, he and I were introducing ourselves by the registration desk. Gotta love the power of Social Media. I can’t imagine this conference being what it was without Twitter, especially.

After these great conversations, I headed down to the partner sessions I had originally intended to go to this morning. I checked with some of the organizers and they said the slides from the partner sessions I had missed would be posted to MySPC, which was good to hear, because I definitely do want to see that content.

I stayed for the last of the sessions, which was about how partners could get involved in selling solutions built with the FAST Search capabilities that Microsoft acquired last year and that are part of SharePoint Server 2010.

Key points I took from this session were:

  • The paradigm the Enterprise Search team operates under is one that says Search experiences should be Visual, Conversational and Actionable.
    • Visual – Search doesn’t need to be merely text. The more immersive the experience, the better, and that can include images, graphics, Silverlight, et al to make the visual component of the search process more engaging.
    • Conversational – Search doesn’t need to be a single query followed by a result. It should be more like a conversation, meaning you ask something you get a response, you ask a more refined question, you get a more specific answer, etc.
    • Actionable – Search should not be separated from the action you take with the results returned. Thus, the results should be provided in a form that permits the user to take logical action. The classic example would be having a product returned in search results that you can easily add to your shopping cart, but obviously, the term “Actionable” can apply much more broadly than that.
  • The reason you go to certain sites EVERY day is because the site adapts as you use it. Making sites built on the Microsoft platform more easily adaptable in these ways is a key priority, and FAST technology is a big piece of that puzzle.
  • The richest sites on the web today bring in social content to make promotions feel seamless, not like ads, but like something a friend might tell you you should be looking at.
  • Partners should focus their efforts on SharePoint itself, on FAST Search for SharePoint and on FAST Search for Internet.
  • Learning FAST Search is challenging, and partners should be prepared for that going in.
  • Search partners should contact searchptnr@microsoft.com for more info

@ThunderLizard and I met up and took a few minutes to say hello to some familiar faces from Twitter, including @sharepointcomic Dan Lewis and @gvaro. (@ThunderLizard might have inadvertently spilled a glass of water on @sharepointcomic, but I think that’s just a rumor.)

After that, as planned earlier, @ThunderLizard and I spent a few brief moments with @sunnyjx and a few other AT&T people we’d never met before who are involved in internal SharePoint deployments in various ways, and then we got our bags and headed to the taxi stand for a ride to the airport, where I looked forward to reading and/or sleeping the whole way home.

The SharePoint Conference experience was over, or so I thought. On the way to the gate, however, I got a chance to say a quick hello to @joeloleson, whose session on the first day was a highlight of the conference for me, and whom I had corresponded with only on Twitter, LinkedIn, TripIt and Facebook (maybe “only” isn’t the right word), but had never met in person.

After a quick bite to eat, @ThunderLizard and I wished each other safe travels and I boarded my Southwest flight back home to BWI. I had waited too long to check in online the previous day, so I had a boarding priority of C (which, on a full flight, pretty much means no window, no aisle). So, I grabbed a middle seat near the front, and sure enough, the two guys next to me had gone to the show as well. As had just about everyone I met all week, they turned out to be very friendly.

We ended up having some terrific conversations about what each of us had seen at the show, and what we liked most, how it applied to our distinct business situations, etc. It was such good conversation that I never actually went to sleep on the flight home.

One of them told me a fascinating rumor, too, that apparently comes from a well-placed source. On Tuesday, the beach party had featured Huey Lewis & the News. The rumor (which I have gone to zero effort to confirm) goes that Microsoft had initially approached U2 about doing the gig (this isn’t completely outside the realm of possibility -- U2 was already scheduled to play in Las Vegas Friday night), and U2 had agreed to play for no pay (!) on one condition: that Microsoft make a sizeable, anonymous donation to the charity of U2’s choice. This rumor had it that the amount was $1.5 million. Presumably because the donation would have been anonymous, or maybe because it would look too extravagant right now, it didn’t actually happen. Now, this story may or may not be true, and may or may not be accurate even if part of it is true, but I had already heard that Roger Daltrey AND Aerosmith, among several others, had played the Oracle Open World conference the previous week, so U2 playing the SharePoint conference is plausible enough to make a good story regardless.

And there you have it. The last of my SharePoint Conference daily summaries. I’ll be writing some blog posts that will expand on some SharePoint topics of interest to myself and my customers over the next several weeks, and posting updates on Twitter regularly as well. Please keep in touch.

Take care,

Friday, October 23, 2009

A Fairy, A Samurai and a Cowboy Walk Into A Casino… Day 3 of SharePoint Conference 2009


My first session on Day 3 was the most mind-blowing one yet, I think.

The Speaker was Doron Bar-Caspi, a Sr. Program Manager with the SharePoint Customer Advisory Team (CAT).  The topic was best practices for geographically distributed SharePoint 2010 solutions, something that, working for a provider of hosted SharePoint, I find a very relevant topic, and one that was very challenging to address in SharePoint 2007. 

To start off, Doron provided some useful data points about latency expectations from different points around the globe.


I learned a lot of new things in this session.  It was a lot to take in.  I highly recommend watching Doron’s session on MySPC if you have access, so you can get even more detail than I have attempted to provide below, but here goes….


A new protocol (to me anyway) is FSSHTTP, which stands for File Synchronization via SOAP over HTTP (HTTP, of course, still stands for Hypertext Transfer Protocol).  This is a technique that reduces network traffic between Office 2010 apps and SharePoint 2010, because the tools are able to send just the diffs back to the server when documents change, rather than the whole document, which would involve latency that is particularly painful for users in geographically distributed scenarios.
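To make the bandwidth savings concrete, here’s a quick sketch I put together in Python. This is absolutely not the real FSSHTTP wire format (the chunk size and hashing scheme here are my own invention), but it shows why sending only the changed pieces of a file beats re-uploading the whole thing:

```python
import hashlib

CHUNK = 4096  # illustrative chunk size; the real protocol's units differ

def chunk_hashes(data: bytes) -> list[str]:
    """Hash each fixed-size chunk so client and server can compare cheaply."""
    return [hashlib.sha256(data[i:i + CHUNK]).hexdigest()
            for i in range(0, len(data), CHUNK)]

def changed_chunks(old: bytes, new: bytes) -> dict[int, bytes]:
    """Return only the chunks whose content differs -- the 'diff' to upload."""
    old_h, new_h = chunk_hashes(old), chunk_hashes(new)
    diffs = {}
    for i in range(len(new_h)):
        if i >= len(old_h) or new_h[i] != old_h[i]:
            diffs[i] = new[i * CHUNK:(i + 1) * CHUNK]
    return diffs

# A 1 MB document with a one-character edit...
old = b"x" * 1_000_000
new = old[:500] + b"y" + old[501:]
diffs = changed_chunks(old, new)
upload_bytes = sum(len(v) for v in diffs.values())
print(f"full upload: {len(new)} bytes; diff upload: {upload_bytes} bytes")
```

One changed character means one changed chunk, so the upload shrinks from a megabyte to a few kilobytes. Over a high-latency WAN link, that difference is everything.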

Visual Roundtrip Analyzer (VRA)

Parts of this demo, in which Doron demonstrated network performance characteristics, were performed using a utility called “Visual Roundtrip Analyzer,” which can be downloaded from Microsoft.  A network emulator that was also used will be part of Visual Studio Team System 2010.

Office Document Cache (ODC)

Something else new (to me) is the Office Document Cache, which sits on the client side and coordinates synchronizations back to the SharePoint Server, much the way your Outbox contents are asynchronously sent back to Exchange.  This is a very compelling feature of Office 2010 and SharePoint 2010, because it means that the client application can commit updates back to the server in an asynchronous manner, which means the user can trust the Office Document Cache to eventually sync with SharePoint, but in the meantime the user can continue to work in the client application without any tangible delay to wait for the file save to complete.  This is especially important on large files like those 100,000,000 row Excel files that are going to be possible in Office 2010.

The Office Document Cache works even if the user has taken the document offline and saved it back.  The updated copy is queued up, and then when connectivity is restored, the file is uploaded back to the server, but with the FSSHTTP feature that ensures that only the diffs actually go over the wire back to the server, again, all in the name of reducing latency over the wire.
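The “Outbox” analogy clicked for me, so here’s a toy Python model of the idea (my own illustration, not the actual ODC implementation): saves return to the user immediately, and a background sync flushes the queue whenever connectivity is available:

```python
from collections import deque

class OfficeDocCacheSketch:
    """Toy model of the Office Document Cache concept: saves are queued
    locally and flushed to the server when connectivity allows."""

    def __init__(self):
        self.pending = deque()   # queued (doc, content) saves
        self.server = {}         # stand-in for the SharePoint content store
        self.online = False

    def save(self, doc: str, content: str):
        # Returns immediately -- the user keeps working with no save delay.
        self.pending.append((doc, content))

    def sync(self):
        # Runs in the background whenever connectivity is restored.
        while self.online and self.pending:
            doc, content = self.pending.popleft()
            self.server[doc] = content  # the real ODC would send only diffs

cache = OfficeDocCacheSketch()
cache.save("report.docx", "v1")   # offline: queued instantly, no waiting
cache.sync()                      # still offline: nothing happens
cache.online = True
cache.sync()                      # reconnected: the queue flushes
print(cache.server)
```

The user-facing point is the `save` call: it never blocks on the network, which is exactly why those huge file saves stop feeling painful.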

As I said in a tweet upon learning about this feature today, I believe The Office Document Cache feature will tend to compel global enterprises to adopt Office 2010 as soon as possible.

But what about conflicts?  If the user can take the file offline, couldn’t someone else start editing it in parallel?  The answer is yes, they could, but the good news is that Microsoft has built multi-master conflict resolution of changes into the product.  I need to get a little more clarity on the exact rules here, but I understand at this point that if two copies of a file work their way back to the server, and one of them has an edit to paragraph A and the other has edits only in paragraph C, that both changes would be merged together to form a new version of the file in SharePoint.  That is intuitive and something I think end users will be able to understand.
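Here’s a little Python sketch of the merge behavior as I understood it. Again, this is my simplified illustration of the rule, not Microsoft’s actual conflict-resolution code, which is surely more sophisticated:

```python
def merge(base: list[str], a: list[str], b: list[str]) -> list[str]:
    """Naive three-way merge by paragraph: if only one side changed a
    paragraph, take that side's version; if both changed it, that is a
    true conflict that would need user intervention."""
    merged = []
    for p_base, p_a, p_b in zip(base, a, b):
        if p_a != p_base and p_b != p_base:
            raise ValueError("true conflict: both sides edited this paragraph")
        merged.append(p_a if p_a != p_base else p_b)
    return merged

base = ["para A", "para B", "para C"]
copy1 = ["para A (edited)", "para B", "para C"]   # one user edits paragraph A
copy2 = ["para A", "para B", "para C (edited)"]   # another edits paragraph C
print(merge(base, copy1, copy2))
# both edits land in the merged version
```

Non-overlapping edits flow together into one new version, which matches the paragraph-A-plus-paragraph-C example from the session.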

Speaking of offline access to documents, Microsoft has rebranded the product formerly known as Groove as SharePoint Workspace.  Both the Office Document Cache and SharePoint Workspace synchronize with SharePoint upon reconnect.

But, wait, there’s more! 

Office Web Applications

The major Office applications, namely Word, PowerPoint, Excel and OneNote, come with browser-based clients, so you can now open your files in a browser-based version, which loads faster than the full client application would, another bonus for remote workers.

Mobile Device Performance Support

As I sat there furiously tweeting away on my Blackberry, Doron began to address the increasing need to provide excellent support for mobile devices.  Is the mobile experience going to be equivalent to the PC experience?  Well, no, although it is easy to predict that we’ll see some fancy mobile apps coming out that support SharePoint in ways that build on top of the fundamental mobile support the platform is going to provide out of the box.

In geographically distributed models, you have varying network capabilities and performance, of course, and it is predictable that this will continue to be a challenge indefinitely.  So, what Microsoft has done is build in options that make it possible to transmit the minimum amount of information over the wireless network, thereby keeping to a minimum the annoying wait times as your mobile device pulls down content from SharePoint.

So let’s say you are a user with a mobile device and you want to look through a document library that you know about, but you have to identify which file in that library is the right file.  In the pre-2010 world, out of the box, you’d have to download those files one by one, spinning up whatever application was required to view each document, if you could in fact view the document at all.

In the 2010 world, you can look through a plain text-based list that requires minimal data to cross the network to you.  From there, you have basically three options for what to do with the documents in that list.  You can open that, say, Word file in a plain text version, which shrinks the size considerably, making it pass more quickly over the network.  This is great, but if you want to see the formatting (tables, fonts, images, etc.) in the document, plain text obviously can’t do that.  So, you can opt to pull down an image of the page in .jpg format, which is a compressed image format, of course, and that means a very small file traveling over the network to you. 

So, now you’ve been able to see the file well enough to confirm that it is indeed the right one, and that all happened quickly and with ease.  Now, you can “download a copy” of the file you have identified, and the full file comes across to you.  By giving you options to try before the full file download, you can look around for what you need much much faster.  I like this concept, and I can’t wait to see what phone app developers will do to make the mobile experience even richer and faster.
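Just to illustrate the payoff, here’s a back-of-the-envelope Python sketch. The renditions and size fractions below are hypothetical numbers I made up; the point is simply how much less crosses the network when you can confirm the file before committing to the full download:

```python
# Illustrative renditions and size fractions only -- not real product numbers.
RENDITIONS = [
    ("plain text", 0.02),       # tiny text-only rendition
    ("jpg page image", 0.10),   # compressed page image with formatting
    ("full download", 1.00),    # the complete file
]

def bytes_to_confirm(full_size: int, stop_at: str) -> int:
    """Total bytes transferred if the user stops at a given rendition."""
    total = 0
    for name, fraction in RENDITIONS:
        total += int(full_size * fraction)
        if name == stop_at:
            return total
    raise ValueError(f"unknown rendition: {stop_at}")

full = 2_000_000  # say, a 2 MB Word file
print(bytes_to_confirm(full, "plain text"))      # confirming via text is cheap
print(bytes_to_confirm(full, "jpg page image"))  # still a fraction of the file
```

If the plain text view tells you it’s the wrong file, you’ve spent a tiny fraction of the bandwidth you would have spent downloading the whole thing.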

So now that you’ve got your geographically distributed solution serving your end users all over the globe, and you’ve got it working efficiently in the branch offices and on a variety of devices, the question is how do you share services across all of that infrastructure?  What Microsoft has done to address this challenge is to break out Shared Service Providers into a new paradigm, in which these types of services are delivered via a mechanism called Service Applications. 

Service Applications

Service Applications, unlike Shared Service Providers, can be shared across farms.  This is a tremendous advantage because now you can use, for example, the Profile Store, or Search, across all of the farms in your solution.  If you’re wondering, “Did Jeff just say all of the farms in my solution?!” -- yes.  Many larger organizations have multiple farms running, either with some sort of log shipping for DR, with some form of 3rd party replication going on, or simply to have full farm capabilities located as close as possible to the target users in the right nodes on the WAN.

So what you can do in a geographically distributed model in SharePoint 2010 is deploy several farms, and distribute them around the globe as relevant, say one farm in North America, one in the EU and one in Asia-Pac.  But, you can optionally have a single service application supporting all of them.  You simply (okay, maybe simply is an overstatement of the relative ease of doing this, but it is doable) need to establish the service application in one “master” farm, and then you can “publish” that service to the other farms.

So now, you can do things with the various service applications, such as enforcing a taxonomy or other standards globally across multiple farms.  Also, associating web applications with services is more flexible now, and you can get better isolation, load balancing, etc.  Powerful stuff!

But wait, there’s more!

Uninterrupted Log Shipping

A common replication model for SharePoint DR and/or geographically distributed farms is SQL log shipping.  The problem has always been getting the log shipping to work with the DR or target database online and connected to the app while users are accessing it, because while the logs are being processed, the target database cannot be read.  In global deployments, you really don’t ever want the system to be offline, because users are accessing it 24/7. 

In SharePoint 2010, there is a new concept called Uninterrupted Log Shipping that basically enables you to actively work with two different databases in a single read-only farm.  What you can then do with PowerShell cmdlets is set up the read-only farm to process logs on one of the two read-only databases, while the SharePoint application is working with the other read-only database. 

Then, once the first database is done processing the logs, you switch the read-only farm over to it for read-only use by SharePoint, and the other database begins processing the logs from the master.  And the process repeats endlessly.  This technique enables you to avoid multi-minute outages at the read-only farm while logs are being processed, and the configuration is being altered to deal with the change in database connection.
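The ping-pong between the two databases can be modeled in a few lines of Python. This is my own toy model of the concept, not how SharePoint or SQL Server actually implements it:

```python
class LogShippingSketch:
    """Toy model of Uninterrupted Log Shipping: one read-only copy serves
    traffic while the other catches up on shipped logs, then they swap."""

    def __init__(self):
        self.shipped = 0        # total log records shipped from the master
        self.applied = [0, 0]   # how far each read-only copy has caught up
        self.serving = 0        # index of the copy SharePoint is reading from

    def cycle(self, new_logs: int):
        """One ping-pong cycle: new logs arrive, the idle copy restores
        everything it is missing, then read-only traffic switches to it.
        Readers never wait on a restore."""
        self.shipped += new_logs
        idle = 1 - self.serving
        self.applied[idle] = self.shipped   # catch the idle copy up fully
        self.serving = idle                 # swap the farm over to it

farm = LogShippingSketch()
for batch in (5, 3, 7):
    farm.cycle(batch)
print(farm.serving, farm.applied)  # the serving copy is always fully caught up
```

After every cycle, the copy being served is current as of the last shipment, and the other copy is free to restore logs without blocking anyone. The double-storage penalty mentioned below falls straight out of the model: you’re always keeping both full copies.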

As a result of the above, the logs can be updated continuously, and the user experience is not disrupted by the process.  The penalty of course is that the read-only farm will need twice the storage that the master farm has, because it is storing two full copies of the database at once.

Because the read/write farm might have sites being added and deleted while log shipping is going along under the hood, and because the Search Service Application, for example, would be crawling content in parallel to the log shipping process (as distinct service app instances, one running in each farm), your site map on the read-only farm is eventually going to get out of sync with the changes coming in through SQL log shipping, and search could potentially return out-of-date, broken links.  So, whenever you are using the uninterrupted log shipping capability, as soon as the log processing has finished, you need to run a PowerShell cmdlet called


and what this will do is make sure that the farm recognizes any changes that have occurred to the site structure.

Whew… That’s all, right?  Nope.  There’s more!

Windows 7 BranchCache

Let’s say you don’t want to have multiple farms.  You want your SharePoint content to all be centralized.  This is a common preference.

BranchCache - Distributed Cache Mode

It is typical, when there is a centralized farm and remote users in a branch office, for the branch users to connect to a VPN and then communicate back to the central server over a common WAN link, which therefore sees high utilization, making the SharePoint app feel slow to respond.

To help minimize traffic over that common link, Distributed Cache Mode means that a document can be downloaded by one user in the branch office, which is a normal transfer of a file over the network.  Then, when a second user in the branch office requests that same file, the SharePoint 2010 server (running on Windows 2008 R2) knows that the first user has already downloaded that file and is in the same branch, so it tells the second user’s machine to download that file from the first user’s machine, in a peer-to-peer paradigm.  This helps to ensure that the network traffic back to the central SharePoint server is limited to the messaging about the request and the response that there is a copy already existing in the BranchCache.

BranchCache – Hosted Cache Mode

An alternative approach called Hosted Cache Mode works essentially the same way but without the Peer to Peer element.  In this scenario, a dedicated server is deployed to the branch office, and it maintains the BranchCache and the connectivity back to the central SharePoint farm.  When users in the branch make requests they go directly to the Central SharePoint Server, and the central server responds to the request with a unique id for the file requested.  The user’s machine then checks to see if that id is available in the cache, and if not, the file is first requested from the central server, and then while the user is using it, in the background, the user’s machine places a copy of that file into the cache server within the branch.

Then, when a second user in the branch makes a request for the same document, the central SharePoint server sends back the same id, and the second user’s machine recognizes that that id already exists in the cache, and thus, the file is served to the second user from the branch cache. 

You might ask why this involves a request to the central server at all if the file requested is already in the BranchCache.  Why not simply start by querying the BranchCache and, only if the file is not found there, go to the central server?  One reason is that if you did that, you would not have usage statistics on the files requested.  In this arrangement, the SharePoint server keeps seamless track of file requests, and logs all of that for reporting and analysis, even though the actual requested file is served to the end user from a machine located in the branch.
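Here’s a toy Python model of that hosted cache flow as I understood it (the content ids are made-up stand-ins for whatever identifier the real protocol uses). Note that the central server logs every request even when the bytes never leave the branch:

```python
class HostedBranchCacheSketch:
    """Toy model of Hosted Cache Mode: the central server always sees the
    request (so usage stats stay complete) but returns only a content id;
    the bytes come from the branch cache server when already present."""

    def __init__(self):
        self.central_files = {"spec.docx": b"...file bytes..."}
        self.request_log = []    # central server's usage statistics
        self.branch_cache = {}   # content id -> bytes, held in the branch

    def request(self, name: str) -> tuple[bytes, str]:
        self.request_log.append(name)        # logged centrally every time
        content_id = f"id:{name}"            # stand-in for the real identifier
        if content_id in self.branch_cache:
            return self.branch_cache[content_id], "branch"
        data = self.central_files[name]      # first fetch crosses the WAN
        self.branch_cache[content_id] = data # populate the branch cache server
        return data, "central"

farm = HostedBranchCacheSketch()
_, src1 = farm.request("spec.docx")   # first user: served from central
_, src2 = farm.request("spec.docx")   # second user: served from branch
print(src1, src2, len(farm.request_log))  # central branch 2
```

Two requests, two log entries, but only one trip across the WAN for the actual file bytes.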

Wow. Wow. Wow.

As I left that session, I was pleased to see that some pastry and coffee were available in the halls, which for some reason was a rare treat at this conference.  Based on past Tech Eds and other conferences, I expected there to be plenty of refreshments in the major hallways during every break each day, but that was simply not the case at this show.  You generally had to walk all the way back to the exhibit hall to get even coffee, and good luck finding a free soda anywhere.  I know that Nintex provided very nice reusable water bottles, but when you need caffeine or some calories, that doesn’t quite fit the bill.  In future conferences, I would hope they go a little bit further with snacks and beverages.

Anyway, next session for me was Capacity and Performance Planning for SharePoint 2010, presented by Zohar Raz, Senior Program Manager, and Kfir Ami-ad, Senior Test Lead, from Microsoft.


Zohar made it clear that Microsoft has been listening to the performance challenges people have been experiencing in SharePoint 2007, and they have put some new controls into SharePoint 2010 that will help with performance.  Performance reliability at scale is a “big bet” Microsoft made in SharePoint 2010.

To complicate matters, SharePoint Server 2010 is doing “more, more and more.” 


There are three times as many services that can be enabled in the out of the box product, and each tier has more to do than it did before.

Some of the cool points in this session included the following:

  • Large scale solutions definitely require some strategic planning upfront.  These are situations where a consultant is advisable.
  • Since you can now have multiple farms with service applications published across all of them, it is now possible to isolate Search, for example, to its own farm, and federate it across all the other farms.
  • In scenarios where SharePoint is being used over the WAN, end users need to be on the latest browsers for the best performance, because IE8 can have 8 simultaneous connections, for example, distributing the burden of downloads and permitting the user to keep working.
  • Because the office client applications save back to SharePoint asynchronously in Office 2010/SharePoint 2010 scenarios, the client application remains functional during the save process, so the user does not feel the performance degrade.  Perception is huge, and this really helps the end user feel like things are moving along fast.
  • As far as the large list and similar enhancements in both SharePoint 2010 and the client apps in Office 2010 go, Zohar cautioned against hitting all of those upper limits at once.  While tremendous scale is possible, you can still see problems depending on combinations of factors.
  • 100 GB is still a good rule of thumb for the largest size you want your content database to reach before you subdivide it.  This is especially true in situations involving heavy read/write, such as team collaboration sites.
  • Throttling is a key new feature that helps to maintain optimal performance.  Large list throttling is offered, as is the ability to throttle excessive client load.  Without throttling on, even just trying to delete a very large list can bring the SharePoint solution to a crawl for other users, especially in prime usage times.  But in SharePoint 2010, the IT administrator is in charge of such latency spikes, and can set windows of time ("happy hours”) during which large list functions are permissible, and during other times, can have SharePoint avoid giving too much resource to large list operations, so the vast majority of your users do not feel slowness because of what one user is doing.
  • Capacity management is a recurring cycle, not a task.  It requires that you scale and adapt to changing needs easily.  This is one reason that virtualization technology can be your friend, as it allows you a great deal of flexibility to scale and adapt to changing performance conditions and cyclical demand periods, such as holiday shopping season, open enrollment periods, etc.
  • The fact that there is an extensible framework for the logging database is a big win for IT, because now you can write your own queries and more easily perform very specific forensic queries to figure out what is happening.
  • Microsoft has devised some standard architectures of different scales.
  • When you scale as high as the large farm architecture, you are allocating servers to specific purposes within each tier.
  • System Center Capacity Planner is gone in SharePoint 2010.  A replacement has yet to be announced.
  • An attendee asked what to do if you are already over 100 GB in your content database.  The answer was you should first move to SharePoint 2010, and then deal with carving up your large site collections, because carving site collections up is much much easier in SharePoint 2010.
  • In SharePoint 2010, you can gradually delete a large site collection, meaning that the intense database hit that is required to achieve this does not happen in one large atomic action that would kill performance on the system until completed.
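The “happy hours” throttling idea above can be sketched as a simple time-window check. This is my own illustration of the concept, not SharePoint’s actual throttling logic (5,000 is the commonly cited default list view threshold in 2010):

```python
from datetime import time

# Hypothetical policy: expensive large-list operations are only allowed
# during a nightly "happy hour" window set by the administrator.
HAPPY_HOUR_START = time(22, 0)   # 10:00 PM
HAPPY_HOUR_END = time(5, 0)      # 5:00 AM (window wraps past midnight)
LIST_VIEW_THRESHOLD = 5000       # commonly cited 2010 default

def large_list_op_allowed(items: int, now: time) -> bool:
    """Small operations always run; big ones only inside the window."""
    if items <= LIST_VIEW_THRESHOLD:
        return True
    if HAPPY_HOUR_START <= HAPPY_HOUR_END:
        return HAPPY_HOUR_START <= now < HAPPY_HOUR_END
    # window wraps past midnight
    return now >= HAPPY_HOUR_START or now < HAPPY_HOUR_END

print(large_list_op_allowed(100_000, time(14, 30)))  # prime time: blocked
print(large_list_op_allowed(100_000, time(23, 15)))  # happy hour: allowed
print(large_list_op_allowed(200, time(14, 30)))      # small op: always allowed
```

The effect is exactly what Zohar described: one user’s monster list operation can’t drag everyone else down during prime usage hours.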

For lunch, I went to the HP session in Palm D.  HP described some very nice capabilities and tools they can provide to customers running SharePoint whether hosting with HP or on-prem, or through another hosting provider.  HP is a competitor in this space, and it was nice to hear how they present their offerings.  They are an impressive provider.

After that, a colleague in sales at work called me to let me know she had discovered that a customer we are speaking to about hosting SharePoint was actually attending the conference as well, and she had suggested that we meet.  So, I skipped the third session and had a terrific talk with the customer, who is thinking about a number of smart ways to move forward in a challenging project to rearrange his company’s IT portfolio and simplify maintenance and management of several solutions, including SharePoint.  Hosting of several applications, including SharePoint, is key to his strategy.  It was fun to do some real work and brainstorm collaboratively with a real customer about how some of what we’re learning at the conference can be applied to an actual project.

After that, I went into the Exhibit Hall and spent some time talking to some key vendors who I want to develop closer relationships with.  I talked also with the Microsoft Online Services team about how their offerings differ from ours. 

As presented at the show, I felt Microsoft made MOS sound a bit like the only option for IT shops looking to have their SharePoint hosted and managed.  But as we discussed, there are a number of situations where MOS is simply not going to have an interest in providing services: obvious ones, such as when a customer wants to host all or a significant portion of an IT portfolio that includes Oracle apps or databases or PeopleSoft applications, and other cases, such as where we already manage the customer’s entire network and could provide a direct connection right into our data centers.  On top of that, we provide consulting services for SharePoint in-house, as opposed to requiring a customer to contract with multiple vendors to get the whole project done.  So while I foresee steady growth for MOS with their standard model in particular, I’m sure that other hosting companies are likewise seeing specific areas where Microsoft Online Services will simply not provide a comprehensive solution for many enterprise customers, so I anticipate a thriving competition among several of the top hosting companies to provide great services in this area.

At this point, you’re probably wondering what the heck the title of this post is all about…  It’s no joke, there really was a SharePoint Fairy (she even merited her own hash tag on Twitter -- search #spfairy).

It turns out, if memory serves, that she is in marketing at a firm in Atlanta called Unbounded Solutions, or for short, “USI,” which is funny, because the company I worked for before we were acquired was USinternetworking or “USi” for short.

Ironically, #spfairy was being interviewed, I believe, by the SharePoint Samurai himself, Mike @gannotti, with whom @ThunderLizard and I posed for this photo, which is suitable for framing if you’d like a copy.

Later, @ThunderLizard and I stopped by the Commerce Server 2009 kiosk on the exhibit floor and learned some specific details about how Commerce Services for SharePoint actually works right now in MOSS 2007.  We have clients who are looking for solutions that tie the SharePoint content management publishing model to e-Commerce sites, so this is an area of interest.  I am not clear just yet on how the Commerce Services platform will be adapted to fit the new SharePoint 2010 model, but at least the Commerce Services are pretty much all web parts as it is, so compatibility issues would probably be the main thing, and I expect Microsoft to address that issue with service packs, etc.

I learned from a post by a user on Twitter that the Hands On Labs for developers can be downloaded now from Microsoft’s website.

We watched a little bit of the Rock Band competition (“SharePoint Idol”), grabbed a bite to eat and then headed over to the Ask The Experts area, where I spoke briefly to two of my favorite presenters at the show, Doron Bar-Caspi and Zach Rosenfield.

We ran into Kirk Evans again there.  Kirk is the Microsoft Architect Evangelist who talked to me about shooting a video interview about my company’s fully managed hosting offering for SharePoint for his Channel 9 show “The Water Cooler.”  We spontaneously shot the video in about 10 minutes.  It’s working its way through the necessary approval channels, and should be posted soon.  I’ll post a note when it’s up, so you can take a look if you’re interested.  For now, check out this interview Kirk did at the conference with Tom Rizzo, Senior Director, SharePoint.

After that, the three of us went out to the Hofbräuhaus for a great meal, drinks and a great sing-along band.

When we got back, we hung out at the House of Blues with a few other attendees, including Eric, the “SharePoint Cowboy,” a very friendly and funny guy whom I’d been following on Twitter for months but hadn’t met yet, before I finally called it a night.


Hard to believe, but already we were down to just another half day to go…

Wednesday, October 21, 2009

Huey, GUI and Sushi: Day 2 of SharePoint Conference 2009

I stayed up pretty late to get my previous blog post in while all the info was still fresh in my mind (okay, you got me… before it scrolled off of Twitter), so I purposely slept in a little late.

I also had to clear up some stuff with some colleagues back home, so I hung out in the room a little long, and missed the free breakfast and first session. Coincidentally, @sharepointkevin also got a late start, so I invited him to join me at the House Of Blues for breakfast, which he did! Great guy and a nice chat. Kevin, who I only recently discovered works out in Kansas City for a good customer of my company’s, told me he’s a veteran of 87 SharePoint implementations going back to Tahoe days. Very impressive.

After breakfast, I caught the second session of the day, which for me was the Overview of Microsoft Online Services for SharePoint. (Full Disclosure: My company offers fully managed hosting for SharePoint and other applications that competes in a number of respects with MOS)

I found the session interesting in that MOS clearly defines the problem space the same way I do, but I am fascinated by the way they’ve gone about it. They appear to have quite a few features that they don’t presently offer in this managed hosting or “cloud” model. I honestly am not sure why in most cases, but they do indicate they are working toward feature parity with on-premises deployments eventually. In the meantime, it seems like you are forced to give up a few key features to go with Microsoft’s offering.


I was a little surprised that MOS Dedicated (which is the offering that comes closest to matching my company’s offerings, though theirs is strictly Microsoft apps) is strictly for orgs with 5,000+ seats. Less than that, and they want you to go with MOS in the multi-tenant model.

They do seem to have some nice automatic deployment features in the multi-tenant offering. The biggest paradigm shift I heard was this concept of putting parts of SharePoint into the cloud solution, and keeping parts on-prem. That’s something that didn’t seem very practical in SharePoint 2007, but in 2010 makes a lot of sense.

The most interesting other tidbits from the session were that

  • Microsoft Online Services has over 1 million seats live, after 4.5 years (if I heard correctly) in business. It wasn’t clear, though, whether that was all SharePoint or some combination of Exchange, OCS and SharePoint, or something like that.
  • Microsoft Online Services standard (multi-tenant) does support the creation of custom workflows, but won’t support custom workflow activities.

I had made an appointment to speak with a vendor this morning, so I did that quick on the exhibit hall floor, and then moved on to the lunch session that Colligo sponsored. The food was pretty good, but the content and room suffered I think from some challenging circumstances. I like Colligo’s product, but I felt the presentation was not as effective as it could have been. A gentleman from Quest Software who was there (Quest is a Colligo customer) made the funny comment that you should get on the right technology, which is clearly SharePoint, because “Steve Ballmer doesn’t go to the Public Folders conference anymore.”

Next up: The Ultimate Team Site session. While there were a number of fascinating UI enhancements demo’d, what really stuck out for me was the fact that as you work with the ribbon controls in editing your pages, you get much better context focus than you did in MOSS 2007, which means much less navigation hunting and pecking. In other words, when you’re working on a page and perform an operation, the page stays up and a little “lightbox” pop-up shows up on top of a dimmed version of the window you were looking at. When the pop-up closes, you are right back on the page you were editing.

Calendar overlays, a great new feature in SharePoint 2010, were a popular topic at this session, demonstrating again that certain key features like that are the most commonly deployed.

I then moved on to Zach Rosenfield’s session on Multi-Tenancy Capabilities in SharePoint 2010. All I can say is WOW. This one was a mind-blower. You should definitely watch the replay and follow Zach’s blog, which I’ve linked to above (click Zach’s name).

Here are some highlights:

  • Multi-tenancy works with nearly all features, with a few key exceptions, most notably FAST Search and PerformancePoint Services, but also a few others, including mail-enabled lists.
  • You will be able to put multiple tenants on multiple site collections inside a single web app that leverages a partitioned database server. Wow! That’s a whole different… Wow.
  • In a multi-tenant environment, you can manage the set of features available to a tenant, and in so doing you can actually enable SharePoint Foundation (in case you didn’t read my post from Day 1, SharePoint Foundation is the new name for WSS) for one tenant while another tenant on the same infrastructure is running full Microsoft SharePoint Server 2010 Enterprise! Read that again if it didn’t sink in the first time.
  • You can actually delegate a subset of Central Admin functionality to the Tenant Admin role. WOW
  • The multi-tenancy model is configured ENTIRELY with PowerShell. There is no GUI.
  • There are some features that help you manage chargebacks effectively, so you can handle billing or cost recovery for the resources used by your tenants fairly simply, though there is no chargeback feature per se.
  • The site creation provider enforces db organization across the entire farm, and the cmdlet New-SPSite accepts a parameter that enables you to target a specific content database.
  • Zach Rosenfield will post some unsupported, unofficial “starter kit” scripts to MySPC, his blog, or both. In the meantime, a team is at work building an official starter kit for the RTM timeframe.
  • You have the ability to limit each tenant to a distinct feature set and you have pretty granular control over which exact features each one gets. Zach also responded to a question and indicated that if you want to limit which site templates a tenant admin can deploy to this environment, you can sort of achieve this by not turning on the appropriate features. If the features that are part of a template are not turned on, then the template will not be available.
  • Multi-tenancy works well with claims-based authentication.
  • Every transaction carries a tenant tag, which enables rich reporting for multi-tenant solutions, though that reporting is not available out of the box. You can go 3rd party or roll your own for that.
  • White papers will be coming out shortly that explain how Microsoft Online Services does multi-tenancy, which is not the only way to do it, but is a good example of how you can do it.
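To make the “ENTIRELY with PowerShell” point concrete, here is a rough sketch of tenant provisioning pieced together from my session notes. This is a hedged illustration, not Zach’s actual scripts: the URLs, database name, and account are made up, and the exact cmdlet and parameter names may differ in the beta, so verify against the starter kit when it appears.

```powershell
# Sketch: provisioning a new tenant from PowerShell (all names are
# hypothetical examples; check cmdlet names against the beta docs).

# Create the tenant -- a "site subscription" groups its site collections
$tenant = New-SPSiteSubscription

# Optionally constrain the feature set this tenant is allowed to use
$pack = New-SPSiteSubscriptionFeaturePack
Add-SPSiteSubscriptionFeaturePackMember -Identity $pack `
    -FeatureDefinition "TeamCollab"
Set-SPSiteSubscriptionConfig -Identity $tenant -FeaturePack $pack

# Create the tenant's root site collection, targeting a specific
# content database via New-SPSite's -ContentDatabase parameter
New-SPSite -Url "http://hosting.example.com/sites/tenant1" `
    -SiteSubscription $tenant `
    -ContentDatabase "WSS_Content_Tenant1" `
    -OwnerAlias "CONTOSO\tenant1admin" `
    -Template "STS#0"
```

There is no GUI equivalent for any of this, which is exactly the point Zach was making.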

After having my mind blown by Zach’s presentation, I went down to the Project Management on SharePoint session co-presented by @meetdux and Bamboo Solutions.

@meetdux did a great job as always with his presentation (“use the word dashboard, and I promise you, you’ll get promoted!”), but I thought the presenters who talked about Bamboo’s tools for PM weren’t enthusiastic enough to get me interested all that much. Still, I think it might be something worth checking out if you’re doing PM in SharePoint.

And that wrapped up sessions for the day. I made a quick call to my family back home while walking my bag back to my room. I donned my @jeffbecraft shirt once more, then met @ThunderLizard for a dinner of Chinese food and sushi at the China Grill in Mandalay Bay,

where we had the good fortune to sit next to a couple of the partners in the company that makes StoragePoint, a product I find very intriguing. Nice guys, too.

And then, it was time for the beach party. Supposedly, this was the biggest party ever thrown on the beach at Mandalay Bay. Easily 2/3 of all attendees were there, so well over 5,000 people most likely, though I didn’t hear a headcount.

The party was 80’s themed, and featured people in 80’s costumes, a good selection of meats on sticks, drinks, some 80’s-themed items like keychain Rubik’s Cubes, and of course, some great entertainment.

There was a break-dancing troupe who had some amazing moves. They performed their routine once near the beginning of the party, and once more near the end.


The headliner for the party was Huey Lewis and the News, who sounded fantastic and played just about all of their well-known hits in a tight set. I had actually seen the band play twice back in the 80’s, and I think their sound has held up great.

I got up pretty close to the pool’s edge before I realized that people had thrown off their shoes and waded into the water.

I stood on the edge a while before a cool lady whose name I didn’t catch told me I needed to get in the water too. So, what the heck. In I went. From that vantage point, I got some great shots of the band, and the show was way more fun to watch and dance to (in the shallow water).



After Huey Lewis’s set wrapped up…


fireworks were launched…


And then those of us standing in the water just kept on dancing to a bunch of classic 80’s tunes that were played over the sound system. Eventually, the breakdancers came out again, the dancers in the water kept on going, and things got a little silly (just a little).


There’s something you don’t see everyday. :-)


After the beach party wound down, I headed inside, soggy pant legs and all, and discovered as many did that there was a great request band playing near the edge of the casino. I sat down and watched a few songs before the band took a break and I headed upstairs.

I wasn’t sleepy yet so I went down and played slots a few times, at one point deciding to quit while ahead, clearing $5.16 on $5.00 wagered. Winner, winner, chicken dinner!!


I put the finishing touches on the blog post here, and then bedtime.

Day 3 tomorrow has some more interesting sessions, and the Ask the Experts session in the evening. Stay tuned! http://twitter.com/jeffbecraft

Tuesday, October 20, 2009

Gear and Tweeting in Las Vegas: Day 1 of SharePoint Conference 2009

The morning got off to a leisurely start. #SPRunners were out early (and they recruited a new member! Not me - Steve Ballmer). Most everyone else seemed to catch enough rest to make it through a day of what ran the risk of being death by a thousand PowerPoints.

By 7:45, however, a large crowd had made the pilgrimage to the Shoreline A room at Mandalay Bay Convention Center, where breakfast was served before a pair of keynote speeches headlined by Steve Ballmer, Tom Rizzo, and Jeff Teper of Microsoft.


This is a fairly big conference, sold out at 7500+ people last I heard.


But, at the same time, the crowd seems small and intimate, which is no accident. Any time you have this many people who work with passion on enhancing collaboration, teamwork facilitation and social networking on- and offline, it’s going to be a friendly atmosphere.

There is a large contingent here that religiously reads each other’s blogs, meets up periodically for various SharePoint Saturday events, other conferences and shows, and regional user groups, and of course, forces the occasional fail whale to pop up on Twitter. Twitter, in fact, is so prevalent that many people made t-shirts with their usernames on them so they could more easily spot each other at the show (yours truly, @jeffbecraft, included).


My colleague Adam Duggan (@thunderlizard) and I were fortunate to run into Kirk Evans (@kaevans) from Microsoft, who is shooting a number of casual interviews for Microsoft’s Channel 9 (http://channel9.msdn.com/posts/kirke/) while at the show. Hopefully, we can sit down for one later in the week <shameless plug>so I can talk with him about Enterprise Hosting for SharePoint on AT&T’s fully virtualized Synaptic platform. </shameless plug> We sat down to breakfast and met some great people from a number of other firms.

After breakfast, we made our way over to the keynote, and awaited Mr. Ballmer’s arrival.


I had signed up with www.endusersharepoint.com to be a live blogger at the conference, and also promised several of my customers that I would be providing frequent updates from the show, so I had my <shameless plug> AT&T Blackberry Curve </shameless plug> charged up and ready to Tweet away as the big announcements came.

It didn’t take long. And not to worry, death by a thousand PowerPoints this was not.

Right off the bat, steveb@microsoft.com (Ballmer encouraged attendees to email him with their feedback) announced that the SharePoint 2010 public beta would be available in November, just a few short weeks away.


Highlighting the increasingly heavy use of mobile devices, Ballmer pointed out that “someone” (widely suspected to be either @meetdux or @gannotti) had a streaming video of the hall from the back of the room. He said it revealed fewer illuminated screens in people’s hands than you might expect, the reason being that mobile devices are still not seamlessly capable of handling the kinds of tasks that the PC can. But, he noted, Microsoft is investing heavily in this area, and it all ties into the vision for SharePoint to provide seamless collaboration across all of your screens (PC, mobile, TV, and more).

Speaking of investing heavily, there was a word that Ballmer used several dozen times in rapid succession during his talk, and that word was “cloud.”


Though he was careful to point out that there is a clear distinction between public Internet-facing SharePoint sites and the cloud-based services that could be used to serve needs on all three –nets (intra/extra/Inter) over the Internet, Ballmer made it very clear that SharePoint in the cloud through Microsoft Online Services and partners including <shameless plug>AT&T Hosting & Application Services</shameless plug> is a big aspect of Microsoft’s investment and strategy behind SharePoint 2010.

He also emphasized that SharePoint 2010 supports easy mixing and matching of on-premises and cloud-based SharePoint to support different features and aspects of SharePoint in the model most appropriate for the customer, offering a win-win to customers who want the ease of management that comes with cloud services, but the reduced latency concerns that come with on-premises hosting of collaboration sites, for example.

Ballmer reiterated a key point about SharePoint – that it enables IT to do more with less better than any other product – before giving the floor to Tom Rizzo, who promptly announced that Visual Studio 2010 beta 2 is now available, and then launched into some fascinating demos of how the developer experience has been improved in SharePoint 2010 and Visual Studio 2010. (Note: Visual Studio 2010’s enhanced features for SharePoint 2010 do not work with earlier versions of SharePoint.)


Judging both from what struck me most about the developer enhancements and from the tweets I saw come across during the demos, the most highly appealing new capabilities were:

  • Business Connectivity Services, a superset of MOSS 2007 Enterprise’s Business Data Catalog, which now provides two-way integration with business applications, enabling significant work to be done with insignificant amounts of custom code. These enhancements are so appealing that @erickraus tweeted “BDC to BCS is like the ugly girl in high school who got hot in college.”
  • Visual Web Part development, which enables the creation of custom web parts without all the hand coding that has been necessary in the past
  • Greatly simplified deployment of SharePoint custom code
  • The Developer Dashboard, which offers detailed statistics on page load times, query times, etc. to help developers get a sense of where their solution is creating performance bottlenecks (a real bonus for the IT side, too).
  • Significantly enhanced debugging capabilities in Visual Studio 2010 that will make developers of custom code for the SharePoint platform much happier

Ballmer came back to briefly reiterate how important cloud services are to the overall SharePoint 2010 strategy, but also how he sees SharePoint for Internet sites exploding in 2010, way beyond the level of adoption seen with MOSS 2007. And then Ballmer mentioned cloud again. And again.

He also mentioned that what we’ve known in the past as Windows SharePoint Services has been rebranded SharePoint Foundation 2010. It remains the core services on which the Server platform is built.

Picking up on Ballmer’s theme of “exploding” growth in SharePoint for Internet sites, Rizzo then launched into a demo of some great new features from SharePoint and the newly added FAST search features, supporting Internet Sites on SharePoint 2010, including:

  • One click page layout for content editors, which empowers IT and developers to create a set of Page layouts that content editors can choose from “in one click” and quickly get content loaded into them, a nice time saver.
  • “Queryless” search where search capabilities are melded seamlessly into the user experience so they never have to type anything into a search box to do a powerful search.
  • “Immersive experiences” can seamlessly interweave SharePoint, Silverlight and FAST search to create a powerful and intuitive user interface

And that was just one session.

Next up was Jeff Teper, whose key theme was that SharePoint over the years has “made the web easier.”

Teper provided the welcome news that branding in SharePoint 2010 is much much easier than it was in MOSS 2007 and earlier versions. Among the built-in UI improvements is a big leap forward in accessibility, a major concern for organizations of all types, most particularly government and large enterprises.

So-called “Web 2.0” enhancements include some great improvements to blogs and wikis (including the ability to edit and leverage wiki markup more easily within SharePoint), as well as impressive new social tagging features. These can help enterprises capitalize on social networking trends as employees seek closer engagement with teammates they may never see in person thanks to current budget constraints, especially in large and geographically dispersed organizations. Web 2.0 enhancements also include:

  • Built-in tag cloud in MySites
  • Facebook and Twitter-like network update stream for the enterprise
  • Richer people search, profiles, and social networking features that are a big leap forward

Even SharePoint’s core features got some big improvements, including:

  • Much larger scale for lists and libraries, enabling a volume of items far beyond the limits imposed by SharePoint 2007
  • Multi-page checkout
  • Document sets, essentially multi-document content types
  • Browse to digital assets on your hard drive and upload directly to your asset library in the background as your digital asset appears within your page
  • Taxonomy management including the ability to leverage taxonomy and content types across multiple farms (not just multiple site collections)
  • Some great new features focused on governance, including features that help to enforce governance plans automatically
  • Improved records management that includes the ability to mark a document as a record without moving it out to a distinct Records Center
  • Digital asset management and true streaming within SharePoint lists and libraries
  • Name search now includes phonetic spelling matches, so if you spell Jeff Teper’s last name “T-E-P-P-E-R” by mistake, his name will still come up in search. The lack of this capability was a big irritation with name search in SharePoint 2007, especially for my large enterprise customers.

SharePoint and Office continue to grow ever more tightly intertwined, which is NOT a bad thing for end users. Office 14 (they skipped unlucky 13) offers some amazing new capabilities including:

  • Excel and Excel Services now offer PowerPivot to make vast sheets of data (100 million+ rows) quick and easy to filter and sort.
  • Terrific metatagging and property coordination between Office and SharePoint, enabling easy searching regardless of the client used.
  • All Office 2010 apps have web-based clients available and coordinate 2-way with SharePoint 2010

And then there’s what’s altogether new in SharePoint 2010:

  • PerformancePoint Services, directly integrated with SharePoint 2010 (the standalone product PerformancePoint Server is no longer going to be offered)
  • Seamless offline and mobile capabilities through tighter integration with SharePoint Workspace (formerly Groove) and mobile devices (not just the Windows Mobile platform either)
  • A major shift in SharePoint 2010 is the move in command-line scripting away from the stsadm executable that was the key to scripting in SharePoint 2007, in favor of PowerShell, which offers some tremendous improvements in flexibility and efficiency for scripting. At beta launch in November, well over 500 cmdlets for PowerShell will be included. You’ll also be able to manage your SharePoint farm remotely from your Windows 7 desktop, using PowerShell. (A –WhatIf parameter even lets you preview what a command will do without actually running it!)
  • Usage analysis capabilities are greatly improved in SharePoint 2010, and they include the ability to write custom reports against the logging database schema, which is now supported! (not for content databases, though; only the logging database)
  • Patching is vastly simplified, and requires virtually no downtime as patches are applied
  • Support for competing browsers is there; @arpanshah even demonstrated large list operations in Firefox.
  • Ability to upgrade to SharePoint 2010 under the hood without modifying your UI right away, permitting a less complicated upgrade project to occur with IT and Development/Branding completely isolated.


Altogether, a very impressive pair of opening keynotes, with some mind-blowing demos.

Following the keynotes was a decent lunch of cold cut sandwiches, and then the breakout sessions began.

I started off with the @joeloleson/@mikewat session on a day in the life of a SharePoint 2010 Administrator, which was in the nearly impossible to locate South Pacific C room, but nevertheless got a strong turnout.


Mercifully, this was not a huge Quest tools sales pitch (for that, there’s a massive booth on the trade floor), but a real assessment of what is important to focus on for IT pros and admins with SharePoint 2010. The list includes the following:

  • Upgrading to SharePoint 2010 is going to be the easiest for the Administrator. The business users, content contributors and end users are going to have lots of new features to learn, but fundamentally, the processes are the same for the Administrator as they were in SharePoint 2007
  • Shared Service Providers are gone from SharePoint 2010. They are replaced by Service Applications, which offer more flexibility in terms of turning them on as you figure them out, in placing them strategically on varying servers, etc.
  • Search indexing is no longer a single point of failure in SharePoint 2010. There is redundancy in the indexing.
  • PowerShell makes scripted deployments for easy scale out much easier than before. PowerShell has everything stsadm has, plus 300+ additional cmdlets!
  • Claims based authentication offers a fascinating but daunting capability that’s going to be great for your external sites
  • “So you thought you had a lot of databases in MOSS?” – In SharePoint 2010, a single web application, with all service applications “lighted up” will create no less than 19 distinct databases.
  • The SharePoint 4.0 Management Console gives you an easy place to start firing off your PowerShell cmdlets, and includes enough to do “essentially anything” while being easily extensible to do anything Microsoft may not have thought of
  • Moving sites and site collections around in SharePoint 2010 is much much easier than in SharePoint 2007, but, as these Quest employees were quick to point out, the built-in tools do not replace the powerful 3rd party tools available from partners.
  • SharePoint 2010 can be configured to be Database Mirroring aware so it can automatically fail over to the mirror site you specify (which you have to have created in advance, of course)
  • The SharePoint Designer story is a much better story in 2010
  • The upgrade is a two-stage process: first, the binary upgrade, which is the focus of your IT department or managed hosting provider, and second, the visual upgrade, which is the responsibility of your developers and designers. The beauty of the process is that the two stages can be performed separately, so you can get onto SharePoint 2010 sooner, without having to worry about UI issues immediately. This will reduce upgrade project coordination complexity and risk.
  • The Best Practice Analyzer has great self-healing features
  • Sandboxing and Developer Dashboard features are supposedly for developers, but are a real bonus for IT pros as well
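If you want to explore the new cmdlets yourself once the beta lands, a quick discovery session looks something like the sketch below. This assumes the SharePoint snap-in is registered on the server you’re logged into; the snap-in name matched what I saw in the Hands-On Lab, but treat it as provisional until the beta docs confirm it.

```powershell
# Load the SharePoint cmdlets into a plain PowerShell session
# (the SharePoint 2010 Management Shell does this for you)
Add-PSSnapin Microsoft.SharePoint.PowerShell

# Count everything SharePoint-related -- the "SP" noun prefix
# is the convention for the new cmdlets
Get-Command -Noun SP* | Measure-Object

# Then drill into any one of them
Get-Help New-SPSite -Full
```

Ten minutes of Get-Command and Get-Help goes a long way toward replacing your old stsadm muscle memory.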

Joel Oleson pointed out that the best practices for all of the new SharePoint 2010 stuff don’t exist yet, which is undoubtedly music to the ears of the hardcore SharePoint people in the community (@deannie called this “a great opportunity for the community”), whose blogs are starved for content about the new major release. (The beta period Non-Disclosure expired today, opening the floodgates on pent up blog posts coming to a browser near you soon).

He did, however, recommend 8 GB of RAM as a practical minimum on the machine hosting your SharePoint 2010 VM. (to which @mferraz noted “16 GB is even better for collab”).

Another best practice is to “test test test” your upgrade early (in the beta period) to see where you are going to run into gotchas.

Oleson also recommended that IT Pros and Admins take the install, PowerShell and Upgrade Hands-On Labs while at the show. I could not find an install HOL, but was very impressed with the PowerShell capabilities after taking that Hands On Lab. I’ll dig into the Upgrade HOL on Tuesday if time permits.

Toward the end of the session, attendees got to see Joel’s blog at http://www.joeloleson.com upgraded from MOSS 2007 to SharePoint 2010.

Next up for me was the pair of light-hearted Administration overview sessions led by Todd Klindt and Shane Young from www.sharepoint911.com.


Here are some highlights:

  • “Almost anything you can imagine you can put together with service apps” (successors to Shared Service Providers) in SharePoint 2010. Service apps can be shared across farms, not just site collections. You can also run multiple instances of the same type of service app. The Farm Configuration Tool provides an easy, quick setup wizard for all of the new service apps, but it is only practical when it comes to setting up your all-in-one dev VM of the SharePoint 2010 beta. For production environments, you need to create those manually. If you DO use the wizard, you’ll get ugly GUIDs in the database names, because SharePoint doesn’t know what’s on the SQL Server, so it assumes you might have another farm’s databases of the same names there; thus the GUIDs, to differentiate. Note that multiple farms can share a database instance, but SHOULDN’T.
  • ISVs can create their own service apps for SharePoint 2010
  • Plan your strategy for laying out your site collections (and expect quite a few of them for large organizations)
  • 100 GB is still a good ceiling for the practical size of a single content database
  • SharePoint 2010 REQUIRES 64-bit or more (“If you can get more bits in there, do it!” Klindt says)
  • Let the PreRequisite wizard take care of installing IIS for you
  • The PreRequisite wizard will default to Internet download method for anything it says you need, but you can direct it instead to local files
  • The new Farm Passphrase is meant to address an old risk involving the setup account: the case where it belonged to a user who has since separated from the company
  • There are a lot of great AD-driven policy improvements in SharePoint 2010, but SharePoint still doesn’t make you do anything to your AD schema just to support SharePoint
  • Claims auth, which on the upside allows you to log in through a variety of authentication sources, makes Kerberos auth configuration and setup “look like a walk in the park”
  • Some mixing and matching of 32-bit and 64-bit is possible in SharePoint 2007, though each tier should have the same architecture at least. In 2010, everything has to be 64-bit.
  • Best Practice: Start moving to Windows 2008 64-bit and SQL Server 2005 64-bit or SQL Server 2008 64-bit, so that’s out of the way and your solution is stable on 64 bit.
  • Best Practice: Always choose Advanced, always choose Complete when installing
  • Managed accounts in SharePoint 2010 simplify the service account insanity we had in SharePoint 2007
  • Tip: If the wizards aren’t working, make sure Central Administration is in a zone that can run scripts!
  • ISA and ForeFront for SharePoint 2010 are “not fully baked” yet; presumed to be essentially the same experience as we have in SharePoint 2007 with those tools
  • Tip: There is a gotcha with PowerShell if you are installing on Windows 2008 where you are NOT using R2 (see kb971831)
  • Virtualization is now supported – encouraged even, but Todd and Shane don’t recommend SQL Server on a VM due to I/O concerns, so only do that if you or your DBA or consultant have expert skills
  • Best Practice: “Everything stsadm can do, PowerShell can do better” – Todd and Shane recommend Zach Rosenfield’s PowerShell breakout session coming up later in the conference. The PowerShell commands run much more efficiently than stsadm commands do.
  • Granular backup is a major new feature in SharePoint 2010 for IT Pros and Admins. Backup at the site collection, web, list or library level. You can optionally use an “unattached content db” to recover without setting up a recovery farm. Idera even has a tool that can mount a database backup file without SQL Server, which offers additional advantages to using this technique.
  • Throttling and Performance Management features can prevent users from making requests that would bring your farm to a crawl. But, even if you determine that some users MUST access thousands of rows in a list all at once, you can enable this only during certain times of day using a setting called Daily Time Window. Generally, you’ll want to configure auto-throttling to be invoked during heavy load periods, BUT you want to turn it off on your beta VM, where it is on by default, because otherwise you are bound to bring your machine to a crawl as throttling fights the underpowered VM
  • Correlation IDs in your logs (in the “14 hive” – formerly the 12 hive, and remember they skipped unlucky 13) allow tracing across the whole farm for more precise troubleshooting
  • The ULS logs offer flood protection which stops logging repeated error messages if they occur within a close window of time. A log entry notes that additional instances of that error were not written to the log for this reason.
  • ULS log to SQL with a published schema means you can query against the log database schema and be fully supported in doing so
  • The Health Analyzer rules run under the SPTimer service. Server affinity enables you to specify which server is the one where timer jobs should run. You can also run SPTimer jobs on demand in SharePoint 2010. And when you do, you get a progress bar to see how far along the job is in its execution. Also status, outcomes and detailed reporting on what failed make troubleshooting much easier.
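The granular backup feature from that list is worth a concrete sketch. This is a hedged illustration of site-collection-level backup and restore as Todd and Shane described it (the URLs and file path here are invented for the example, and there are more parameters than shown):

```powershell
# Back up a single site collection to a file -- no more full-farm
# backup just to protect one team's site
Backup-SPSite -Identity "http://portal/sites/hr" `
    -Path "E:\Backups\hr-sitecollection.bak"

# Restore it elsewhere -- e.g., into a recovery web application --
# overwriting any existing site collection at that URL
Restore-SPSite -Identity "http://recovery/sites/hr" `
    -Path "E:\Backups\hr-sitecollection.bak" -Force
```

Combine this with the unattached content database technique they mentioned and you can pull a single document back without ever standing up a full recovery farm.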

Todd Klindt said to check out his blog www.toddklindt.com/blog for some stuff that didn’t make it into their slide deck in time. Also watch for the SharePoint 2010 “snack” videos to become available online later this week.

Not bad for the first day, eh?

By the way, after my sessions, I got lucky. Because I had stuck one of their stickers on my laptop bag, the folks at @CriticalPath rewarded me with a free course!

Lastly, I visited the Visio 2010 team’s happy hour at the EyeCandy bar in the middle of the Mandalay Bay casino, and then @ThunderLizard and I grabbed dinner at Lupo by Wolfgang Puck. Outstanding pork chops!!

Whew… I’m ready to hit the pillow and start all over again tomorrow.

Follow me at http://twitter.com/jeffbecraft or using the hash #spc09 for live microblogging during the sessions I attend each day.