Clients are Jerks: aka How Halo 4 DoSed the Services at Launch & How We Survived

At 3am PST on November 5th, 2012, I sat fidgeting at my desk at 343 Industries watching graphs of metrics stream across my machine. Halo 4 was officially live in New Zealand, and the number of concurrent users gradually climbed as midnight gamers came online and began to play.  Two hours later, at 5am, Australia came online and we saw another noticeable spike in concurrent users.

With AAA video games, especially multiplayer games, week one is when you see the most concurrent users.  As with blockbuster movies, large marketing campaigns, trade shows, worldwide release dates, and press all converge to create excitement around launch.  Everyone wants to see the movie or play the game with their friends the first week it is out.  The energy around a game launch is intoxicating.  However, running the services powering that game is terrifying.  There is nothing like production data, and we were about to get a lot of it over the next few days.  To be precise, Halo 4 saw 4 million unique users in the first week, who racked up 31.4 million hours of gameplay.

At midnight PST on November 6th I stood in a parking lot outside a Microsoft Store in Seattle, surrounded by 343i team members and fans who had come out to celebrate the launch with us and get the game at midnight.  I checked in with the on-call team: Europe and the East Coast of the US had also come online smoothly.  In addition, the real-time Cheating & Banning system I had written in the month and a half before launch had already caught and banned 3 players who had modded their Xboxes in the first few hours; I was beyond thrilled.  Everything was going according to plan, so after a few celebratory beers I headed back into the office to take over the graveyard shift and continue monitoring the services.  The next 48 hours were critical and likely when we would see our peak traffic.

As the East Coast of the United States started playing Halo after work on launch day, we hit higher and higher numbers of concurrent users.  Suddenly one of our APIs related to Cheating & Banning was hitting an abnormally high failure rate and starting to affect other parts of the Statistics Service.  As the owner of the Halo 4 Statistics Service and the Cheating & Banning Service, I OK'd throwing the kill switch on the API and then began digging in.

The game was essentially DoSing us.  We were receiving 10x the expected number of requests to this particular API, due to a bug in the client which reported suspicious activity for almost all online players.  The increased number of requests caused us to blow through our IOPS limit in Azure Storage, which correctly throttled and rejected our exorbitant number of operations.  That caused the requests from the game to fail, and the game would then retry each request three times, creating a retry storm that only exacerbated the attack.

Game over, right?  Wrong.  Halo 4 had no major outages during launch week, the time notorious for games to have outages.  The Halo 4 Services survived because they were architected for maximum availability and graceful degradation.  The core APIs and components of the Halo Services necessary to play the game were explicitly called out, and extra measures were taken to protect them.  We had a game plan to survive launch, which involved sacrificing everything that was not one of those core components if necessary.  Our team took full ownership of our core services' availability: we did not just anticipate failure, we expected it.  We backed up our backups for statistics data, requiring multiple separate storage services to fail before data loss would occur, built in kill switches for non-essential features, and had a healthy distrust of our clients.

The kill switch I mentioned earlier saved the services from the onslaught of requests made by the game.  We had built a dynamically configurable switch into our routing layer, which could be tuned per API.  By throwing the kill switch, we essentially re-routed traffic to a dummy handler which returned a 200 and either dropped the data on the floor or logged it to a storage account for later analysis.  This stopped the retry storm, stabilized the service, and alleviated the pressure on the storage accounts used for Cheating & Banning.  In addition, the Cheating & Banning service continued to function correctly because we had more reliable data coming in via game events on a different API.
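For illustration, here is a minimal sketch of what a per-API kill switch can look like, written as an ASP.NET Web API delegating handler.  The actual Halo routing layer was custom and driven by dynamic configuration; the class and names below are hypothetical.

using System.Collections.Concurrent;
using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical per-API kill switch. In a real system the set of killed routes
// would come from dynamic configuration rather than a static in-memory set.
public class KillSwitchHandler : DelegatingHandler
{
    private static readonly ConcurrentDictionary<string, byte> KilledRoutes =
        new ConcurrentDictionary<string, byte>();

    public static void Kill(string routePrefix)
    {
        KilledRoutes[routePrefix] = 1;
    }

    public static void Restore(string routePrefix)
    {
        byte ignored;
        KilledRoutes.TryRemove(routePrefix, out ignored);
    }

    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        foreach (var routePrefix in KilledRoutes.Keys)
        {
            if (request.RequestUri.AbsolutePath.StartsWith(routePrefix))
            {
                // Lie to the client: acknowledge the request with a 200 and drop
                // (or optionally log) the payload instead of doing real work.
                return new HttpResponseMessage(HttpStatusCode.OK);
            }
        }

        return await base.SendAsync(request, cancellationToken);
    }
}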

The game clients were being jerks (a bug in the code caused an increase in requests), so I had no qualms about lying to them (sending back an HTTP 200 and then promptly dropping the data on the floor), especially since this API was not one of the critical components for playing Halo 4.  In fact, had we not built in the ability to lie to the clients, we most certainly would have had an outage at launch.

But the truth is the game devs I worked closely with over countless tireless hours leading up to launch weren't jerks, and they weren't incompetent.  In fact they were some of the best in the industry.  We all wanted a successful launch, so how did our own in-house client end up DoSing the services?  The answer is priorities.  The client developers for Halo 4 had a very different set of priorities: gameplay, graphics, and peer-to-peer networking were at the forefront of their minds and resource allocations, not how many requests per second they were sending to the services.

Client priorities are often very different from those of the services they consume, even for in-house clients.  This is true for games, websites, mobile apps, etc.  In fact it is not limited to pure clients; it is even true for microservices communicating with one another.  These priority differences manifest in a multitude of ways: sending too much data on a request, sending too many requests, asking for too much data or for an expensive query to be run, and so on.  The list goes on and on, because the developers consuming your service are often focused on a totally different problem, not on your failure modes and edge cases.  In fact one of the major benefits of SOA and microservices is to abstract away the details of a service's execution to reduce the complexity any one developer has to think about at any given time.

Bad client behavior happens all over the place, not just in games.  Astrid Atkinson just said in her Velocity Conf talk, "Google's biggest DoS attacks always comes from ourselves."  In addition, I'm currently working on fixing a service at Twitter which is completely trusting of internal clients, allowing them to make exorbitant requests.  These requests result in the service failing, a developer getting paged with no means of remediating the problem, and the inspiration for finally writing this post.  Misbehaving clients are common in all stacks, and they are not the bug.  The bug is the implicit assumption that because the clients are internal they will use the API in the way it was designed to be used.

Implicit assumptions are the killer of any Distributed System.

Truly robust, reliable services must plan for bad client behavior and explicitly enforce their assumptions.  Implicitly assuming that your clients will "do the right thing" makes your services vulnerable.  Instead, explicitly set limits and enforce them, either manually via alerting, monitoring, and operational runbooks, or automatically via backpressure and flow control.  The Halo 4 launch was successful because we did not implicitly trust our clients; instead we assumed they were jerks.
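As a rough sketch of what "explicitly enforcing limits" can look like in code, the snippet below caps the number of concurrent in-flight requests and sheds load once the service is saturated, so callers get fast failures (backpressure) instead of slow timeouts that fuel retry storms.  The class and numbers are illustrative, not the Halo implementation.

using System;
using System.Threading;
using System.Threading.Tasks;

// Illustrative concurrency limit: trust is replaced with an explicit, enforced cap.
public class ConcurrencyLimiter
{
    private readonly SemaphoreSlim _slots;

    public ConcurrencyLimiter(int maxConcurrentRequests)
    {
        _slots = new SemaphoreSlim(maxConcurrentRequests, maxConcurrentRequests);
    }

    public async Task<T> RunAsync<T>(Func<Task<T>> operation)
    {
        // Refuse immediately rather than queueing forever; shedding load early
        // keeps the service healthy when a client misbehaves.
        if (!await _slots.WaitAsync(TimeSpan.Zero))
        {
            throw new InvalidOperationException("Service at capacity; request rejected.");
        }

        try
        {
            return await operation();
        }
        finally
        {
            _slots.Release();
        }
    }
}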

Much thanks to Ines Sombra for reviewing early drafts

You should follow me on Twitter here

Clarifying Orleans Messaging Guarantees

There has been some confusion around Orleans messaging guarantees that I wanted to take a second to clarify.  In past talks on Halo 4 and Orleans I mistakenly mentioned that Orleans supports At Least Once messaging guarantees.  However, that is not the default mode.  By default, Orleans delivers messages At Most Once.

It's also worth pointing out that the paper Orleans: Distributed Virtual Actors for Programmability and Scalability says in section 3.10, "Orleans provides at-least-once message delivery, by resending messages that were not acknowledged after a configurable timeout," which describes the non-default, configurable behavior.  This, along with some of my talks, has led to some of the confusion.

In Orleans, when messages are sent between grains, the default message passing is request/response.  If a message is acknowledged with a response, it is guaranteed to have been delivered.  Internally, Orleans does best-effort delivery.  In doing so it may retry certain internal operations; however, this does not change the application-level messaging guarantee, which remains At Most Once.  This is similar to TCP: TCP may retry internally, but the application code using the protocol will receive the message once or zero times.

Orleans can be configured to do automatic retries upon timeout, up to a maximum number of retries.  In order to get At Least Once messaging you would need to implement infinite retries.  Enabling retries is not the recommended configuration, since in some failure scenarios it can create a storm of failed retries in the system.  It is recommended that application-level logic handle retries when necessary.
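For example, application-level retries around a grain call can be as simple as the sketch below.  The helper is illustrative and not part of the Orleans API.

using System;
using System.Threading.Tasks;

// Illustrative retry helper: the application decides how many times to retry a
// grain call and how long to back off, instead of enabling resends in Orleans.
public static class Retry
{
    public static async Task<T> WithRetriesAsync<T>(
        Func<Task<T>> operation, int maxAttempts, TimeSpan initialDelay)
    {
        var delay = initialDelay;
        for (var attempt = 1; ; attempt++)
        {
            try
            {
                return await operation();
            }
            catch (Exception)
            {
                if (attempt >= maxAttempts)
                {
                    throw;
                }

                // Back off between attempts so a struggling silo is not hammered.
                await Task.Delay(delay);
                delay = TimeSpan.FromMilliseconds(delay.TotalMilliseconds * 2);
            }
        }
    }
}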

In the Halo Services we ran Orleans in the default mode, At Most Once message delivery.  This guarantee was sufficient for some services, like the Halo Presence Service.  However, the Halo Statistics Service needed to process every message to guarantee that player data was correct.  So in addition to using Orleans to process the data, we utilized Azure Service Bus to durably store statistics and enable retries, ensuring that all statistics data was processed.  The Orleans grains processing player stats were designed to receive messages at least once, leading us to design idempotent operations for updating player statistics.
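To give a flavor of what an idempotent update can look like, here is a minimal, hypothetical example (not the actual Halo code) that de-duplicates by message id, so a redelivered Service Bus message does not double-count a stat.

using System;
using System.Collections.Generic;

// Hypothetical idempotent stat update: applying the same message twice has the
// same effect as applying it once, which makes at-least-once delivery safe.
public class PlayerStats
{
    private readonly HashSet<Guid> _appliedMessageIds = new HashSet<Guid>();

    public int Kills { get; private set; }

    public void ApplyKillUpdate(Guid messageId, int kills)
    {
        // HashSet.Add returns false if the id was already seen: a duplicate
        // delivery, so the update is simply ignored.
        if (!_appliedMessageIds.Add(messageId))
        {
            return;
        }

        Kills += kills;
    }
}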

I hope this helps clarify Orleans messaging guarantees.  This has also been documented on the Orleans Github Wiki.

You should follow me on Twitter here

Tech WCW #2 – Frances E. Allen

The A.M. Turing Award has been given annually by the Association for Computing Machinery since 1966.  The recipients are chosen based on their technical contributions to the field of computing.  Often referred to as the "Nobel Prize" of computing, it is generally regarded as the highest honor in the field.  For the first 40 years after its inception it was awarded to men, but in 2006 Frances E. Allen became the first woman to win Computer Science's most prestigious award.

Allen won the Turing Award for her "pioneering contributions to the theory and practice of optimizing compiler techniques that laid the foundation for modern optimizing compilers and automatic parallel execution."  Her work, spanning 45 years at IBM, laid the groundwork for modern compiler optimizations and the automatic parallel execution of code.  Not only did she develop the theory, she executed on it, building high-performance compilers and systems throughout her career.  These compiler techniques are fundamental to how we write and develop code today.

Allen grew up in a small town in upstate New York.  She gravitated towards mathematics in high school and went to Albany State Teachers College (now SUNY Albany) to pursue her degree.  After returning to her hometown to teach high school mathematics for a brief period, she continued her education at the University of Michigan, Ann Arbor.  At Michigan she began taking courses in computing, and in 1957 she graduated with a master's degree in mathematics.

Allen had originally planned to return to Peru, New York and continue teaching at her local high school, but IBM recruited her at one of its on-campus career fairs.  Her mathematical and computing prowess, coupled with her ability to teach, resulted in a role at IBM's Thomas J. Watson Research Center, where she taught staff scientists John Backus's newly developed programming language, FORTRAN.

At the time, programs were written in assembly or machine code, and the staff scientists were incredibly good at optimizing code for their specific machine and task.  Many were skeptical that learning this new high-level language would be beneficial.  However, Allen, with Backus's help, was able to win over the IBM scientists, citing the two main goals of the developing language: programmer productivity and application performance.  These two goals would become a theme of Allen's career.

In order to teach the class, Allen first had to learn the language herself; she was often learning language features only a few weeks before her students.  To learn FORTRAN she began reading the compiler source code, which sparked her interest in compilers.  "It set my interest in compiling, and it also set the way I thought about compilers, because it was organized in a way that has a direct heritage to modern compilers," Allen remarked.

Allen’s next big project at IBM was working on the compilers for the Stretch and Harvest supercomputers.  Stretch was IBM’s first transistorized supercomputer, and Harvest was a custom add on built for the NSA to do code breaking of secret messages.  Harvest, with its stream coprocessor and the TRACTOR magnetic tape system, could process 3 million characters per second.  Allen and her team set upon the daunting task of building one compiler framework, targeting two machines and operating on three source languages.  This was an incredibly ambitious project at the time given that most compilers were written in assembly targeting one machine.

During the Stretch/Harvest project Allen was also IBM's liaison with the NSA.  She helped coordinate the design of the Alpha programming language, designed to detect patterns in arbitrary text.  In 1962 Allen spent a year with the NSA installing the system and defining acceptance tests.

The Stretch/Harvest program is often viewed as a major failure because it made aggressive performance estimates, promising 100 times the speed of the IBM 704 when in actuality it was only 30 times faster.  However, the compiler work Allen was involved in was incredibly successful: it was one of the first instances of a compiler with a shared optimizing back end that could compile multiple languages and produce code that could run on multiple machines.

Allen continued her work on compilers when she joined Project Y, the Advanced Computing Systems (ACS) project.  This was one of the first compilers built for a CPU that did not process instructions one at a time but could instead work on multiple instructions simultaneously.  This concurrency introduced a whole new set of challenges, so new techniques were used to optimize the compiler, including flow analysis: instead of representing a program as a linear sequence of statements, the compiler now represented it as a graph that could be analyzed to discover further optimizations, like re-using a computed value in another region of code.  Allen's work on the ACS compiler led to programs that could execute much faster than on previous systems.

One of Allen’s final projects for IBM was the Parallel TRANslation Group (PTRAN) a compiler which took FORTRAN programs written for linear execution and generated code capable of executing on parallel computer architectures.  This project introduced the concepts of the program dependence graph, a representation now used by many parallelizing compilers to detect and extract parallelism from sequential code.

In 1989 Allen became the first female IBM Fellow.  In 1991 she became an IEEE Fellow, and in 1994 an ACM Fellow.  In 2002 she received the Augusta Ada Lovelace Award from the Association for Women in Computing, and that same year she retired from IBM.

Throughout her career Allen focused on taking programs as programmers like to write them and making them run efficiently through sophisticated analysis and optimization of the code.  Arguably, without Allen's compiler optimization work, FORTRAN would not have been adopted as successfully in early computing, due to programmers' reluctance to give up performance for high-level abstractions.  Without Allen's work we would not have modern compilers with flow analysis and parallelization of code.  Her work has truly had a lasting impact and a major influence on how computing is done today.

Read more about what TechWCW is, and check out all of my Tech Woman Crush Wednesdays.

Seattle to San Francisco

“A journey is a person in itself; no two are alike … we find that we do not take a trip; a trip takes us”

At the end of December I made the move from Seattle to San Francisco.  After all of my things were packed in Seattle I started the drive down the coast.  I headed down US 101, through Oregon, and made a detour over to Highway 1 in California.  On the morning of December 31st, 2014, after 3 days on the road, I crossed over the Golden Gate Bridge and arrived in my new city.

TechWCW #1 – Poppy Northcutt


In December of 1968, Apollo 8 became the first manned spacecraft to leave Earth orbit, circle the Moon, and return safely.  The crew consisted of three male astronauts: Frank Borman, James Lovell, and William Anders.  There were also dozens of men back at Mission Control, but only one woman: twenty-five-year-old Frances "Poppy" Northcutt.

Poppy Northcutt was the first woman to work as part of NASA’s Mission Control.  She was the first woman to put on the headsets.  She helped design and write the computer program which plotted the return to Earth trajectory for Apollo 8 and subsequent missions.

Northcutt started her career by attending the University of Texas, where she studied mathematics because she wanted to get "a man's job … there were advantages to doing things where you could get paid more and avoiding women's work," she remarked when asked about her choice of major.  She graduated in three and a half years and then went to work for TRW, an aerospace contractor, in 1965.  At the time TRW had a contract to partner with NASA on the Apollo program, which Northcutt began working on.

Northcutt’s first job title at TRW was Computress, a female technical aid that did a lot of data analysis.  She quickly started asking a lot of questions and taking the source code home every night to read it.  “I started looking around at these dudes that were working with me and I thought, ‘You Know, I’m as smart as they are,” said Northcutt.  The other engineers noticed her talent and she was quickly promoted to “member of the technical staff”, the general term for engineer.

Northcutt’s contributions to the return to earth trajectory program were incredibly valuable, as she found a flaw and corrected it in the early design.  37959D22-FBE4-41BF-9F9B-6E610F2E88A1

So on December 24th, 1968, Northcutt sat wearing the headphones in Mission Control as Apollo 8 rounded the far side of the Moon, where no communication with the astronauts was possible.  "Everyone in the room is not breathing… Nobody's heart is beating.  We're just totally still waiting," recalls Northcutt.  When Mission Control regained communication with Apollo 8 after it circumnavigated the Moon, cheers broke out in the room, and on December 27th, 1968, Apollo 8 returned safely to Earth.

Northcutt continued to work for TRW on the Apollo missions, working in the real-time computer center.  She once again played a pivotal role during Apollo 13.  For that launch, Northcutt had traveled to Kennedy Space Center to see the liftoff in person, as her team was not originally scheduled to be at Mission Control during the flight.  However, when Apollo 13 ran into trouble, several frantic attempts were made to reach Northcutt, who had an unlisted phone number.  Eventually she was reached and rushed back to Texas.  Northcutt and the other engineers on the team helped recalculate Apollo 13's return trajectory and bring the crew safely home.  For their part in this mission, Northcutt and her team members were awarded the Presidential Medal of Freedom Team Award.

While continuing to work at TRW, Northcutt attended law school at the University of Houston Law Center and graduated in 1984.  She is currently a criminal defense lawyer.


Read more about what TechWCW is, and check out all of my Tech Woman Crush Wednesdays.

TechWCW Intro

A few weeks ago, I stumbled across a photo of Margaret Hamilton next to a stack of source code. It’s a powerful image and I found myself wondering about the girl in the photo so I began to do “research” aka stalk her on the internet.

The more I read about Margaret Hamilton, the more excited I became. Here was a woman doing unprecedented things with computers and technology; it was inspiring to see. I quickly sent out a few tweets with what I had discovered, and they gained a lot of attention.

As my tweets gained traction, I continued to stare at the photo of Margaret and wished I had known about her as a girl, when I was learning to code. How inspiring it would have been to have that picture as a poster on my wall. I began to wonder how many amazing women from computing’s history I did not know about. So I composed one more tweet.


This tweet also gained a lot of traction, and so to satisfy my own curiosity and the internet I’m starting Tech Woman Crush Wednesday #techwcw, with an accompanying blog series.

The hashtag #wcw has been around for some time on Twitter, Instagram, Facebook, and Tumblr. Users participate by sharing a picture of women they admire or who inspire them. #techwcw is my attempt to continually inspire myself, and to showcase incredible technical women who have done or are doing exceptional work.

This by no means is meant to be a definitive list, nor will it have any rank or order.  My plan is to release a new post featuring my current #techwcw at least once a month on the first Wednesday. Ideally there would be one once a week, but I have my own tech things to accomplish, other projects under way, and time constraints. If you have amazing technical women you think I’d enjoy learning about or would like to see featured, please send them my way, or play along on your favorite form of social media.

Check out all of my Tech Woman Crush Wednesdays

You should follow me on Twitter here

Playing with Penguins in Patagonia & Other South American Adventures

I caught a glimpse of myself unabashedly smiling in the rearview mirror of the car taking me to the airport.  Just a few hours before I’d gone through my usual pre-travel jitters.  I had thoughts of bailing, irrationally wishing that I could stay in Seattle and idle the days away in the comfort of familiarity, but now that the trip was underway I had a smile big enough to devour the world.  The world is big and bright and beautiful and I was off to explore another part of it.

I traveled from Seattle to Santiago via Dallas, but my true destination was even farther south.  I had a 7-hour layover and a travel companion to pick up in Santiago, so I grabbed a cab into the city for lunch, a bottle of wine, and a jaunt up San Cristobal Hill before returning to the airport to hop on another plane to Punta Arenas.  After 3 planes and nearly 30 hours of traveling, I finally stepped out into the cold, crisp air of Punta Arenas, Chile.  It felt like I was standing at the Edge of the World.

The first order of business in Patagonia: finding some penguins.  We boarded a boat which took us to Isla Magdalena, an island populated by 120,000 Magellanic penguins.  Our speedboat sped through the Strait of Magellan before docking at the island brimming with penguins; clearly I was beyond excited.  We spent an hour wandering around the windswept island among the penguins, many of whom were nesting, anxiously waiting for their eggs to hatch.  I contemplated staying on the island forever, perhaps becoming the Jane Goodall of penguins, but there were other adventures in Patagonia to be had.


We departed the edge of the world in a tiny Chevy Optra hatchback with a manual transmission.  Our destination: Puerto Natales, which would be our base camp for exploring Torres del Paine over the next few days.  Side note: learning to drive a manual car in South America was hilariously fun.

Torres del Paine was breathtakingly beautiful.  I constantly found myself stopping and staring, mouth agape.  One day we hiked the 13-mile trail to the basin at the base of the towers.  Another day we sailed up the Esperanza fjord in search of glaciers.  In the afternoon we donned what can best be described as a bright orange all-weather Snuggie before boarding a Zodiac to continue up the Serrano River and back into the park in pursuit of more glaciers and breathtaking views.

Torres del Paine was outstandingly beautiful and diverse.  The red sandstone towers jutted up inside a ring of granite mountains, green-blue glacial alpine lakes were plentiful, and waves rolled through the inky blue waters of the larger lakes.  Guanacos, deer/alpaca-like animals, roamed the grasslands, along with ostrich-like rheas and dozens of unfamiliar birds.  The whole scene was surreal; the photos cannot possibly do it justice.

Base of the Towers

Guanacos

Torres del Paine

More photos from the Edge of the World

We spent our last night at the edge of the world eating guanaco (pretty tasty) and having drinks at a sky bar overlooking the water, facing south.  Cheers from the Edge of the World!  I was sad to be leaving, but once again we were off to explore warmer climates.

Santiago

En route to Buenos Aires, we had another long layover in Santiago, so we headed back into the city, grabbed lunch and checked out the art museum before continuing on.

More photos from Santiago

Buenos Aires reminded me of a combination of New York City and Paris.  It's a large city filled with a variety of neighborhoods.  We wandered through Recoleta, stopping at the cemetery.  In Palermo we did some shopping and hung out at cafes.  We strolled through the uber-rich apartment complexes and down to the waterfront in Puerto Madero, and rode bikes to La Boca and San Telmo.

We snacked on empanadas and tortas during the day and ate Bife de Chorizo and drank Malbecs from Mendoza for dinner.  And of course we went and saw Tango.

Plaza de Mayo, Buenos Aires

Jacaranda Trees, Buenos Aires
Buenos Aires Cemetery

More photos from Buenos Aires

We arrived in Montevideo via ferry; it's just across the Rio de la Plata from Buenos Aires.  The first evening was filled with new friends, beer in incredibly large bottles, and dancing till 5am.  During the days we wandered along Montevideo's sun-drenched streets and headed down to the coast to stroll along La Rambla.  We even managed to catch a soccer game, where Club Nacional won 4 to 1!


More photos from Montevideo

Every great adventure must come to an end, and so did mine.  After nearly two weeks in South America I boarded a plane to head back to the United States for Thanksgiving feeling wonderfully exhausted.

You should follow me on Twitter here

Creating RESTful Services using Orleans

After the announcement of the Orleans preview, there was a lot of discussion on Twitter.  One comment in particular caught my eye.

I think this is a bit of a misunderstanding of how Orleans can and should be used in production services, so this blog post is an attempt to clarify and demonstrate how to build RESTful, loosely coupled services using Orleans.

Orleans Programming Model

Orleans is a runtime and programming model for building distributed systems, based on the actor model.  In the programming model there are a few key terms.

  • Grains – The Orleans term for an actor.  These are the building blocks of Orleans based services.  Every actor has a unique identity and encapsulates behavior and mutable state.  Grains are isolated from one another and can only communicate via messages.  As a developer this is the level you write your code at.
  • Silos – Every machine Orleans manages is a Silo.  A Silo contains grains, and houses the Orleans runtime which performs operations like grain instantiation and look-up.
  • Orleans Clients – clients are non-silo code which makes calls to Orleans Grains.  We’ll get back to where this should live in your architecture later.

In order to create Grains, developers write code in two libraries: GrainInterfaces.dll and Grains.dll.  The GrainInterfaces library defines a strongly-typed interface for each grain.  The methods and properties must all be asynchronous (returning Task or Task&lt;T&gt;), and these define what types of messages can be passed in the system.  All Grain Interfaces must inherit from Orleans.IGrain.

/// <summary>
/// Orleans grain communication interface IHello
/// </summary>
public interface IHello : Orleans.IGrain
{
    Task<string> SayHello();

    Task<string> SayGoodbye();
}

The implementation of the Grains should be defined in a separate Grains library.  Each Grain implementation should implement its corresponding Grain Interface and inherit from Orleans.GrainBase.

/// <summary>
/// Orleans grain implementation class HelloGrain.
/// </summary>
public class HelloGrain : Orleans.GrainBase, HelloWorldInterfaces.IHello
{
    Task<string> HelloWorldInterfaces.IHello.SayHello()
    {
        return Task.FromResult(" I say: Hello! " + DateTime.UtcNow.ToLongDateString());
    }

    Task<string> HelloWorldInterfaces.IHello.SayGoodbye()
    {
        return Task.FromResult("I say: Goodbye! " + DateTime.UtcNow.ToLongDateString());
    }
}

At compile time, code is generated in the GrainInterfaces dll to implement the plumbing needed by the Silos to perform message passing, grain look-up, etc.  By default this code will be under GrainInterfaces/Properties/orleans.codegen.cs.  There are a lot of interesting things happening in this file; I recommend taking a look if you want to understand the guts of Orleans a bit more.  Below I've pulled out snippets of the generated code.

Every Grain Interface defined in the library will have a corresponding Factory class and GrainReference class generated.  The Factory class contains GetGrain methods.  These methods take in the unique grain identifier and create a GrainReference.  If you look below you will see that the HelloReference class has corresponding SayHello and SayGoodbye methods with the same method signatures as the interface.

public class HelloFactory
{
    public static IHello GetGrain(long primaryKey)
    {
        return Cast(GrainFactoryBase.MakeGrainReferenceInternal(typeof(IHello), 1163075867, primaryKey));
    }

    public static IHello Cast(IAddressable grainRef)
    {

        return HelloReference.Cast(grainRef);
    }

    [System.SerializableAttribute()]
    [Orleans.GrainReferenceAttribute("HelloWorldInterfaces.IHello")]
    internal class HelloReference : Orleans.GrainReference, IHello, Orleans.IAddressable
    {
        public static IHello Cast(IAddressable grainRef)
        {

            return (IHello) GrainReference.CastInternal(typeof(IHello), (GrainReference gr) => { return new HelloReference(gr);}, grainRef, 1163075867);
        }

        protected internal HelloReference(GrainReference reference) :
                    base(reference)
        {
        }

        public System.Threading.Tasks.Task<string> SayHello()
        {
            return base.InvokeMethodAsync<System.String>(-1732333552, new object[] {}, TimeSpan.Zero );
        }

        public System.Threading.Tasks.Task<string> SayGoodbye()
        {
            return base.InvokeMethodAsync<System.String>(-2042227800, new object[] {}, TimeSpan.Zero );
        }
    }
}

In an Orleans Client you would send a message to the HelloGrain using the following code.

IHello grainRef = HelloFactory.GetGrain(0);
string msg = await grainRef.SayHello();

So at this point, if you are thinking "this looks like RPC," you are right. Orleans Clients and Orleans Grains communicate with one another via remote procedure calls that are defined in the GrainInterfaces. Messages are passed over TCP connections between Orleans Clients and Grains. Grain-to-grain calls are also sent over a TCP connection if the grains are on different machines.  This is really performant and provides a nice programming model.  As a developer you just invoke a method; you don't care where the code actually executes, which is one of the benefits of Location Transparency.

Ok, stay with me.  Deep breaths.  Project Orleans is not trying to re-create WCF with hard-coded data contracts and tight coupling between services and clients. Personally I hate tight coupling (ask me about BLFs, the wire struct in the original Halo games, if you want to hear an entertaining story), but I digress…

RESTful Service Architectures

Orleans is a really powerful tool for implementing the middle tier of a traditional 3-tiered architecture: the Front-End, which is an Orleans Client; the Silos running your Grains and performing application-level logic; and your Persistent Storage.

On the Front-End you can define a set of RESTful APIs (or whatever other protocol you want, for that matter), which then route incoming calls to Orleans Grains to handle application-specific logic, using the Factory methods generated in the GrainInterfaces dll.  In addition, the Front-End can serialize/deserialize messages into the loosely coupled wire-level format of your choosing (JSON, Protocol Buffers, Avro, etc.).

RESTful Orleans Architecture

By structuring your services this way, you completely encapsulate the dependency on Orleans within the service itself, while presenting a RESTful API with a loosely coupled wire format.  This way clients can happily communicate with your service without fear of tight coupling or RPC.

The below code uses ASP.NET WebApi to create a Front End Http Controller that interacts with the Hello Grain.

public class HelloController : ApiController
{
    // GET api/Hello/{userId}
    public async Task<string> Get(long userId)
    {
        IHello grain = HelloFactory.GetGrain(userId);
        var response = await grain.SayHello();
        return response;
     }

     // DELETE api/Hello/{userId}
     public async Task Delete(long userId)
     {
        IHello grain = HelloFactory.GetGrain(userId);
        await grain.SayGoodbye();
     }
}

While this is a contrived example, you can see how you can map your REST resources to individual grains.

This is the architectural approach the Halo 4 Services took when deploying Orleans.  We built a custom, lightweight, super fast front-end that supported a set of HTTP APIs.  HTTP requests were minimally processed by the front-end and then routed to the Orleans Grains for processing.  This allowed the game code and the services to evolve independently of one another.

The above example uses ASP.NET Web API; if you want something lighter weight, check out OWIN/Project Katana.

*HelloGrain Code Samples were taken from Project “Orleans” Samples available on Codeplex, and slightly modified.

You should follow me on Twitter here

Orleans Preview & Halo 4


On Wednesday at Build 2014 Microsoft announced the preview release of Orleans.  Orleans is a runtime and programming model for building distributed systems, based on the actor model.  It was created by the eXtreme computing group inside Microsoft Research, and was first deployed into production by 343 Industries (my team!) as a core component of the Halo Services built in Azure.

I am beyond excited that Orleans is now available for the rest of the development community to play with.  While I am no longer at 343 Industries and Microsoft, I still think it is one of the coolest pieces of tech I have used to date.  In addition, getting Orleans and Halo into production was truly a labor of love and a collaborative effort between the Orleans team and the Halo Services team.

In the Summer of 2011, the services team at 343 Industries began partnering with the Orleans team in Microsoft Research to design and implement our new services.  We worked side by side with the eXtreme computing group, spending afternoons pair programming, providing feedback on the programming model, and what other features we needed to ship Halo 4 Services.  Working with the eXtreme computing group was an amazing experience, they are brilliant developers and were great to work with, always open to feedback and super helpful with bug fixes and new feature requests.

Orleans was the perfect solution for the user-centric nature of the Halo Services.  Because we required high throughput and low latency, we needed stateful services.  The Location Transparency of actors provided by Orleans, along with the asynchronous "single-threaded" programming model, made developing scalable, reliable, and fault-tolerant services easy.  Developers working on features only had to concentrate on the feature code, not message passing, fault tolerance, concurrency issues, or distributed resource management.

By the Fall of 2011, a few months after our partnership began, Orleans was first deployed into production to replace the existing Halo: Reach presence system, in order to power the realtime Halo Waypoint ATLAS experience.  The new presence service built on Orleans in Azure had parity with the old service (presence updates every 30 seconds) and added the ability to push updates every second, providing realtime views of players in a match on a connected ATLAS second screen.

After proving out the architecture, Orleans, and Azure, the Halo team moved into full production mode, rewriting and improving upon the existing Halo Services, including Statistics Processing, Challenges, Cheating & Banning, and Title Files.

On November 6, 2012, Halo 4 was released to the world, and the new Halo Services went from a couple hundred users to hundreds of thousands of users in the span of a few hours.  So if you want to see Orleans in action, go play Halo 4 or check out Halo Waypoint; both of those experiences are powered by Orleans and Azure.

Now here is the fun part: Orleans has been opened up for preview by the .NET team.  You can go and download the SDK.  In addition, a variety of samples and documentation are available on CodePlex (I know it's not GitHub, sad times, but the samples are great).  I spent Wednesday night playing around with the samples and getting them up and running.

I highly recommend checking Orleans out and providing feedback to the .NET team.  Stay tuned to the .NET blog for more info, and feel free to ask me any questions you may have.  In addition, I have a few blog posts in the works to help share some of the knowledge I gained while building the Halo 4 Services using Orleans!

References

More Halo, Azure, Orleans Goodness

You should follow me on Twitter here

Digital Ghosts

To wrap up my Digital 1: Photography class, our final assignment was to produce a mini show of 5 photos with a common theme of our choosing, along with an artist statement, to display at the final class.


Digital Ghosts

Technology is a common part of our everyday lives.  We interact with it at home and in the workplace.  Advances in mobile technology no longer require people to be tethered; instead they are constantly connected, and there is an invisible digital fabric all around us.

Technology is a huge passion of mine, as I am a software developer working on web services, so I decided to explore visualizing the pervasiveness of technology all around us through wearable technology.

All of the pieces in this collection were shot while models wore a Light Hoodie.  The Light Hoodie was created from a strand of individually programmable full-spectrum LEDs (NeoPixels) driven by a Trinket circuit board and powered by batteries.  The string of LEDs was sewn into the hood and the inside of black hoodies.

Every piece in this collection was created using a single exposure.  The images were captured in Seattle using pre-existing light sources and the light hoodies.  No additional light sources were used to paint scenes.
