Microsoft’s Xbox One DRM: How It Could Have Been Done

The public reaction to the now-former DRM policies of the Xbox One has been confusing, to say the least. Microsoft’s announcements of the policies, made during the PR nightmare that followed the initial reveal of the Xbox One, were received about as well as Frankenstein’s monster was by the torch-and-pitchfork-armed mob. In the weeks leading up to E3 2013, the gaming community argued both sides of the issue, but the perceived consensus was that the public stood against the Xbox One’s policies. Despite delivering a media briefing at E3 more focused on core gamers than in past years, Microsoft found itself overshadowed by the announcement that Sony’s PlayStation 4 would not require an online connection at any time.

At the risk of losing ground to Sony in the upcoming console generation, Microsoft reversed its decisions on the Xbox One DRM: not just the 24-hour check-in, but all of the Xbox One’s new online features. Almost in an instant, the gaming community’s reaction seemed to shift from anger and alienation to disappointment and regret that the features which actually benefited consumers would be lost along with the rest. I can only imagine that members of the Xbox One team at Microsoft were left stunned by the reaction.

Deciphering the Viewpoint

We can interpret this series of events in a variety of ways. We could assume that the more vocal members of the community were only heard when their side of the argument was at stake. That is probably part of it, but treating it as the sole reason leaves us having learned nothing. We could also conclude that Microsoft handled the spread of information about the Xbox One policies poorly. That is likely as well, but it reduces Microsoft’s problem to one of messaging rather than product. I would argue that the truth is a combination of these two ideas, with one addition. The possibility is, and stay with me here, that perhaps the gaming community wanted some of the Xbox One’s features, but not all of them.

Could it have been done?

I’ll argue that it could have been done with full acceptance from the public. I’ll even argue that Microsoft could have been the one to do it. I’ve heard the arguments that this generation was the wrong time and that Microsoft was ahead of itself with these policies. I disagree. Microsoft’s conviction that this was the generation to separate the Xbox One from the competition was not flawed in itself; the execution was at fault. Here’s how I think Microsoft could have gotten the community on its side with the Xbox One:

The Example

The Steam community has raved about the potential of using a set-top box or console with the service for quite some time now. Steam recently introduced Big Picture mode, a UI designed specifically for use with a television and controller, all while rumblings of a Steam box continue to turn in the rumor mill. Anyone who claims that consumers have not wanted a Steam-like experience on a console is in denial. However, it’s no secret that Steam suffered its own growing pains in the beginning. In designing the Xbox One, Microsoft could have taken advantage of what Valve had learned through the evolution of Steam. With that understood, the Xbox One’s 24-hour check-in is baffling. It is the one policy introduced by Microsoft that was panned from both sides. Why introduce a problem that the competition has already solved?

Controlling the Message

To the outside consumer looking in, it almost seems as if Microsoft believed that the restrictions on the Xbox One would simply be glossed over and ignored, like every other Terms of Service agreement. It is also possible that the policies were still being finalized up until the console reveal and were not yet well understood even internally at Microsoft. Unfortunately, what Microsoft saw fit to release in an unfinished state, the gaming community saw as a potential deal breaker.

I believe there are circumstances under which the gaming community would have accepted the policies of the Xbox One (with the exception of the 24-hour check-in). Microsoft’s own preparedness to speak to the policies and inform the consumer is one of them. The customer needed to see that Microsoft was taking the changes seriously and understood both sides of the argument. The problem with the message following the Xbox One reveal is that it seemed to be all restriction with no benefit to the consumer. It wasn’t until nearer to E3 that benefits such as the family share plan were announced, and by then the damage had been done. Had the benefits to the consumer outweighed the restrictions, the community would ultimately have seen the policies as a win.

An Olive Branch

The ultimate benefit to consumers of an all-digital gaming platform is a reduction in the price of games. Since publishers and developers recoup the sales previously lost to used games, they can afford to offer new games at a reduced price. This argument is made in favor of the Xbox One, but it seems to rest on Steam as the example of the all-digital model. The problem is that digital game sales on the Xbox 360 have developed a reputation for maintaining the launch price long after other retailers have reduced the price of the disc-based versions. That raises an important question about Steam sales: does Steam provide reduced pricing because it faces competition for digital sales on the PC, or because there is no used market to undercut it?

I believe that had Microsoft committed to Steam-like sales on digital games for the Xbox 360 before announcing the new policies of the Xbox One, the gaming community would have been much more accepting of the loss of trading and used game sales. Microsoft has no competition in digital game sales; you can only purchase digital Xbox games from Xbox Live. Microsoft should have recognized this as a concern for the consumer and offered sales on the current generation as an olive branch.


An argument that continues to be thrown around by those in support of the Xbox One’s DRM is that publishers and developers cannot be faulted for not wanting to lose used game sales to GameStop and other resellers. It’s important to understand that, despite the events surrounding the Xbox One, the gaming community does not actually object to that idea. Gamers want to support the industry and its creators, and we understand that used game sales are ultimately a detriment to that. Used games can be removed from the market entirely, and with gamer support, but the platform needs to remain as accessible as it is today.

It’s easy to point out all of the ways the Xbox One announcement went wrong after the fact. It is much more difficult to anticipate these problems ahead of time. Despite how things have shaken out so far, these features will arrive on consoles sooner rather than later, probably even in the upcoming generation.

Thanks for reading. Please leave a comment or get in touch with me on Twitter.

An in depth look at ASP.NET MVC’s TempData

Ask around about using TempData in ASP.NET MVC and you’ll get plenty of opinions, but few answers to real questions. “It’s contrary to the MVC pattern,” the more respectful responses will say; others will simply “answer” your question with, “Just don’t use it.” That can be a frustrating wall to encounter when you are simply looking to evaluate TempData as an option for your application, or when you need to support code that already uses it.

Few technologies actually deserve that “do not use” label for all situations. The amount of technology elitism that is spread around forums and the internet is staggering. As developers, we have all found small applications for less commonly used technology, even if it is just that sparingly used script or internal web site. There are pros and cons to every technology, and any developer worth their weight in dirt should be able to recognize that.

So, OK, end rant. The point is that despite internet opinion, there are some valid uses for TempData. A wizard-based process comes to mind: there are times when you just don’t want to post the same data back to the server at every step. Doing so is cumbersome and leaves you responsible for validating whatever comes back from the client every time, whether you’ve already validated it or not.

What is TempData?

TempData is essentially managed session storage. ASP.NET MVC stores your objects in session state, keeps them for the duration of a single subsequent request, and then removes them, unless you say otherwise. See the code snippet below for a quick sample:

public ActionResult WizardStep1(WizardStep1Model model)
{
     if (ModelState.IsValid)
     {
          TempData["Step1"] = model;
          // OR TempData.Add("Step1", model); works fine as well.
          return RedirectToAction("WizardStep2");
     }

     return View(model);
}
In the WizardStep1 action, we save the model from step 1 to TempData if it is valid, and then we move on to step 2. TempData is essentially a Dictionary with a key of string and a value of object, so we can use most of the methods we would normally use with a Dictionary.

Retrieving from and keeping TempData

Let’s consider a three step wizard process. Step 1 is as shown above. We start getting data from the user and move on to Step 2. During Step 2, we gather even more data and move on to step 3 where we (finally!) save everything to the database.

In order to carry our data from steps 1 through 3 (cue Brian McKnight), we will need some mechanism to keep the data from step 1 while also adding the data from step 2, since TempData is only persisted for one request. This is where the Keep method comes in.

public ActionResult WizardStep2(WizardStep2Model model)
{
     // We use the same key from when we set the data to get it back...
     WizardStep1Model step1 = (WizardStep1Model)TempData["Step1"];

     // BUT, we can also use ContainsKey or TryGetValue in cases
     // where the data might be null...
     if (step1 == null)
     {
          // The user may have skipped step 1, so we go back.
          // We could save the step 2 data to TempData if we wanted to.
          return RedirectToAction("WizardStep1");
     }

     // We need to tell TempData to keep the Step 1 model for one more request.
     TempData.Keep("Step1");

     if (ModelState.IsValid)
     {
          // And, if Step 2 is valid, we'll save it as well.
          TempData["Step2"] = model;
          return RedirectToAction("WizardStep3");
     }

     return View(model);
}

We’ll need to keep the data for Step 1 whether the Step 2 model is valid or not, so we do that outside of the if block for ModelState.IsValid.

It is important to note that TempData only persists for one request. The GET request that follows the RedirectToAction counts as one such request. This means that in the GET action for each step, we need to keep the TempData so that it is still there when we post back. Here is how this works on the redirect to step 3.

public ActionResult WizardStep3()
{
     // Keep everything currently in TempData for the coming POST.
     TempData.Keep();
     return View();
}

Notice that in the call to TempData.Keep in WizardStep2, we provided the key that we used to store the step 1 data. That marks only that object for retention in TempData; any objects that are not marked will be removed. In WizardStep3, we have both step 1 and step 2’s data in TempData, so we call TempData.Keep with no parameters to mark every object in TempData for retention.

When we post to WizardStep3, we can retrieve both step 1 and step 2’s data from TempData, using a similar method to step 2. From there we can save it all to the database at once using Entity Framework or any other database persistence.
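To make that concrete, here is a sketch of what the final POST action might look like. Note that WizardStep3Model, WizardContext, and the Wizard entity are hypothetical names invented for this example, not part of the original walkthrough:

```csharp
[HttpPost]
public ActionResult WizardStep3(WizardStep3Model model)
{
     // Pull the earlier steps back out of TempData using the same keys.
     WizardStep1Model step1 = (WizardStep1Model)TempData["Step1"];
     WizardStep2Model step2 = (WizardStep2Model)TempData["Step2"];

     if (step1 == null || step2 == null)
     {
          // Something expired or a step was skipped, so we start over.
          return RedirectToAction("WizardStep1");
     }

     if (ModelState.IsValid)
     {
          // Save all three steps at once; WizardContext is a hypothetical
          // Entity Framework DbContext for this sketch.
          using (var db = new WizardContext())
          {
               db.Wizards.Add(new Wizard(step1, step2, model));
               db.SaveChanges();
          }

          // No TempData.Keep here: we're done, so let the data expire.
          return RedirectToAction("Complete");
     }

     // Validation failed, so keep the earlier steps for the next attempt.
     TempData.Keep();
     return View(model);
}
```

Notice that the happy path deliberately does not call Keep, which is exactly how the wizard data cleans itself up once the process completes.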

Why TempData over Session?

You can see from the example above that once the user is finished with the wizard, we no longer need the data from those forms. If we had saved that data to the session, it would stay there until we removed it manually, abandoned the session, or the session expired. Like TempData, the session has its benefits, but in this case it is easier to tell TempData to keep the data on the rare occasions we need it longer than it is to remember to tell the session to get rid of it every time.
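The difference in defaults is easier to see side by side. A small sketch, with illustrative key names, of the cleanup each approach requires:

```csharp
// With Session, nothing expires on its own...
Session["Step1"] = step1;
Session["Step2"] = step2;

// ...so after the wizard finishes, you must remember to clean up manually:
Session.Remove("Step1");
Session.Remove("Step2");

// With TempData, the default is the opposite: an entry vanishes after the
// next request, and you opt in to keeping it only when you need it longer.
TempData["Step1"] = step1;
TempData.Keep("Step1"); // only on the requests where it must survive
```

Forgetting the Session.Remove calls leaks data for the life of the session; forgetting a TempData.Keep just sends you back to step 1, which is the safer failure.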


The truth is that many of the complaints about TempData are valid. It is a loophole outside of the MVC pattern, and it is a volatile approach to persisting data. The problem is that despite the views of many opinionated developers, no one pattern is the solution to every problem. If nothing else, I hope you now have the information to decide whether using TempData is right for you.

Thanks for reading. Feel free to leave any questions or comments.

Managing an SQLite database on an Android Virtual Device

Mobile database development is often unfamiliar territory for those coming from the world of web development. Many database administrators are used to managing their databases through a management tool such as SQL Server Management Studio, phpMyAdmin, or MySQL Workbench. They can feel a bit out of place when they find themselves creating SQLite databases by writing queries in Java or Objective-C code. Sometimes having a visual representation of the data just helps with understanding a system, and it can speed up development.

So, how can I directly manage an SQLite file?

When running an Android application on an actual device, we always have access to the file system over USB, so we can just download the SQLite database to our machine and manage it from there. In addition, there are many SQLite browser applications available for Android, such as SQLite Manager and SQLite Viewer, that allow you to browse the contents of an SQLite database directly from your device.

Note: The SQLite database for your application will always be found in the data -> data -> {application package name} -> databases folder on the Android file system.

The problem is that many developers do not develop on an actual Android device, at least not in the early stages. Instead, they run their application on the Android Virtual Device through the Android Developer Tools or Eclipse. While it is a bit more complicated, you can also download your application’s SQLite database from a virtual device. Here’s how it’s done:

Before following the steps below, make sure that your Android Virtual Device is running and connected to Android Developer Tools. Also, make sure that your application has successfully created an SQLite database on the file system.

1. Open the DDMS perspective in ADT by clicking the DDMS button in the top right hand corner of the application. You may only see the “Java” and “Debug” perspectives initially. If that is the case, you will need to click the open perspective button and choose “Other…” from the menu. From the next screen, choose DDMS and click OK.


The “Open Perspective” button in ADT.

The “Open Perspective” dialog in ADT with DDMS highlighted.

2. You should see your AVD information in the left hand pane. In the right hand pane, choose the File Explorer tab.

3. From the file tree, expand to the data -> data -> {application package name} -> databases folder where you should see your SQLite database.

4. Select the database file and click the “Pull a file from the device” button in the top right hand corner of the File Explorer pane. A save dialog should appear.

The DDMS File Browser with the “Pull a file from the device” button shown.

5. Choose a location to save the file on your local machine.

Once the file is on your local machine, you can use any SQLite management tool to browse and modify your database. SQLite Explorer is a great free, open-source option for those looking for a recommendation.

There is also a plug-in for Eclipse available that will handle the process of downloading and opening the database for you. It allows you to manage the database directly from Eclipse. You can download it here.

If you need to make a change to the database, you can do it through your management software, and use the “Push a file onto the device” button from the File Explorer to push the updated database to the virtual device.
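For those who prefer the command line, the same pull and push can be done with the adb tool that ships with the Android SDK; the emulator must be running, and the package and database names below are placeholders for your own:

```shell
# List the databases your app has created on the virtual device
adb shell ls /data/data/com.example.myapp/databases

# Pull the database file down to the current directory
adb pull /data/data/com.example.myapp/databases/myapp.db .

# After editing it locally, push the updated copy back to the device
adb push myapp.db /data/data/com.example.myapp/databases/myapp.db
```

This works against a virtual device because the emulator's file system is fully readable; a real, non-rooted device may restrict access to the data directory.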

This is a great method of seeing exactly what is going on with your database every step of the way. While unit testing is still a crucial part of making sure your application functions correctly, visualizing the data can help to quickly resolve obvious bugs.

Thanks for reading. Feel free to leave any questions or comments.