Windows 8 Consumer Preview and Visual Studio 11 Ultimate Beta with .NET Framework 4.5 Beta are released

Today Microsoft is releasing some of its new and long-awaited products as betas.

Windows 8 Consumer Preview:

Microsoft is taking huge risks with the Windows 8 operating system they are building. It is completely different from what we have been used to in the past with the Windows operating system. It’s Windows reimagined and reinvented from a solid core of Windows 7 speed and reliability. It’s an all-new touch interface. It’s a new Windows for new devices. And it’s your chance to be one of the first to try it out.

You can find the Windows 8 Consumer Preview here:
http://windows.microsoft.com/en-US/windows-8/consumer-preview

It sure looks like a serious change compared to what we are used to now.

Visual Studio 11 Beta and .NET Framework 4.5 Beta:

Of course, with the release of the new Windows 8 Consumer Preview, the new Visual Studio 11 Beta and .NET Framework 4.5 Beta are being released as well, since they are the components needed to develop applications on the Windows Runtime in Windows 8:
http://blogs.msdn.com/b/somasegar/archive/2012/02/29/visual-studio-11-beta-and-net-4-5-beta-available-now.aspx

I’m definitely looking forward to installing the Windows 8 Consumer Preview on a virtual machine and having a look at the next Windows we will be working with!

Cheers,
Robbin

Everything you need to know about Windows Azure Blob Storage including permissions, signatures, concurrency, …

In my attempt to cover most of the features of Microsoft’s cloud computing platform Windows Azure, I’ll be covering Windows Azure storage in the next few posts.

Why use Windows Azure storage:

  • Fault tolerance: Windows Azure Blobs, Tables and Queues are replicated three times within the same data center for resiliency against hardware failure. No matter which storage service you use, your data is replicated across different fault domains to increase availability.
  • Geo-replication: Windows Azure Blobs and Tables are also geo-replicated between two data centers hundreds of miles apart on the same continent, to provide additional data durability in the case of a major disaster, at no additional cost.
  • REST and availability: In addition to using the storage services from your applications running on Windows Azure, your data is accessible through a REST API from virtually anywhere, at any time.
  • Content Delivery Network: With one-click, the Windows Azure CDN (Content Delivery Network) dramatically boosts performance by automatically caching content near your customers or users.
  • Price: It’s insanely cheap storage

The only reason you would not be interested in the Windows Azure storage platform would be if you’re called Chuck Norris …
Now if you are still reading this line it means you aren’t Chuck Norris, so let’s get on with it, as long as it is serializable.

In this post we will cover Windows Azure Blob Storage, one of the storage services provided by the Microsoft cloud computing platform. Blob storage is the simplest way to store large amounts of unstructured text or binary data such as video, audio and images, but you can store just about anything in it.

The concept behind Windows Azure Blob storage is as follows:

Storing and retrieving data with Windows Azure Blob Storage

There are three things you need to know about to use Windows Azure Blob storage (a short code sketch follows this list):

  1. Account: the Windows Azure storage account, which is the account containing the blob, table and queue storage. The blob storage of a storage account can contain multiple containers.
  2. Container: a blob storage container, which behaves like a folder in which we store items.
  3. Blob: a Binary Large Object, which is the actual item we want to store in the blob storage.
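
To make this concrete, here is a minimal sketch of the three concepts using the Microsoft.WindowsAzure.StorageClient library from the Windows Azure SDK. The connection string, container name and blob name are just example values of mine, not part of any official sample:

    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;

    class BlobStorageDemo
    {
        static void Main()
        {
            // 1. Account: parse the storage account from a connection string.
            //    "UseDevelopmentStorage=true" targets the local storage emulator.
            CloudStorageAccount account =
                CloudStorageAccount.Parse("UseDevelopmentStorage=true");
            CloudBlobClient blobClient = account.CreateCloudBlobClient();

            // 2. Container: behaves like a folder; container names must be lowercase.
            CloudBlobContainer container = blobClient.GetContainerReference("documents");
            container.CreateIfNotExist();

            // 3. Blob: the actual item we store, here a simple text blob.
            CloudBlob blob = container.GetBlobReference("hello.txt");
            blob.UploadText("Hello, Windows Azure Blob Storage!");

            // Retrieving the blob again is just as easy.
            string content = container.GetBlobReference("hello.txt").DownloadText();
        }
    }

Once this works against the local storage emulator, you only have to swap the connection string for the one of your real storage account.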

Everything you need to know about Windows Azure caching service to improve performance for your cloud services

One of the features provided with the Windows Azure cloud computing platform is the Windows Azure Caching service. In this post we will cover the basics and show you how to set up the Windows Azure cache cluster.

1. Using memory caching to improve performance

By caching information in memory and retrieving it from memory on subsequent requests, we greatly enhance performance, because we do not have to retrieve the information from disk every time, which is a lot slower than retrieving it from memory. Data that is requested often and is not subject to rapid change is ideal for caching in memory. On the first request, the information is retrieved from disk or database, after which we store it in the memory cache. On subsequent requests, we check whether the data is available in the cache and, if it is, we serve it directly from memory, which is a lot faster.
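
As an illustration of that pattern, here is a minimal cache-aside sketch against the DataCache API of the Windows Azure Caching service (the AppFabric caching client). It assumes a cache client has already been configured in app.config/web.config; the Product class and LoadProductFromDatabase method are hypothetical stand-ins:

    using System;
    using Microsoft.ApplicationServer.Caching;

    [Serializable] // objects stored in the cache must be serializable
    public class Product
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class ProductCatalog
    {
        // DataCacheFactory is expensive to create, so keep a single instance.
        private static readonly DataCacheFactory CacheFactory = new DataCacheFactory();
        private static readonly DataCache Cache = CacheFactory.GetDefaultCache();

        public Product GetProduct(int productId)
        {
            string key = "product-" + productId;

            // Check the cache first; Get returns null on a cache miss.
            Product product = Cache.Get(key) as Product;
            if (product == null)
            {
                // Cache miss: retrieve from the (slow) database ...
                product = LoadProductFromDatabase(productId);

                // ... and store it in the cache, so subsequent requests
                // are served directly from memory.
                Cache.Put(key, product);
            }
            return product;
        }

        // Hypothetical database call, standing in for the slow retrieval.
        private Product LoadProductFromDatabase(int productId)
        {
            return new Product { Id = productId, Name = "Product " + productId };
        }
    }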

You can get an idea of how big the difference can be in some cases by retrieving data from memory instead of from disk.

Windows Azure Caching service to improve performance in a cloud computing platform

Continue reading

Using Windows Azure Access Control Service to provide a single sign-on experience with popular identity providers

One of the services provided by the Windows Azure cloud computing platform is the Windows Azure Access Control Service. The Windows Azure Access Control Service is a hosted service that provides federated authentication and rules-driven, claims-based authorization.

Quoted directly from MSDN:

Windows Azure Access Control Service (ACS) is a cloud-based service that provides an easy way of authenticating and authorizing users to gain access to your web applications and services while allowing the features of authentication and authorization to be factored out of your code. Instead of implementing an authentication system with user accounts that are specific to your application, you can let ACS orchestrate the authentication and much of the authorization of your users. ACS integrates with standards-based identity providers, including enterprise directories such as Active Directory, and web identities such as Windows Live ID, Google, Yahoo!, and Facebook.

Available features on the Windows Azure Access Control Service:

  • Integration with Windows Identity Foundation (WIF)
  • Out-of-the-box support for popular web identity providers including Windows Live ID, Google, Yahoo, and Facebook
  • Out-of-the-box support for Active Directory Federation Services (AD FS) 2.0
  • Support for OAuth 2.0 (draft 10), WS-Trust, and WS-Federation protocols
  • Support for the SAML 1.1, SAML 2.0, and Simple Web Token (SWT) token formats
  • Integrated and customizable Home Realm Discovery that allows users to choose their identity provider
  • An Open Data Protocol (OData)-based management service that provides programmatic access to the ACS configuration
  • A browser-based management portal that allows administrative access to the ACS configuration

Now there is quite some stuff in that list that I have no knowledge about. And to be honest, security, Active Directory and so forth are not really my biggest interests. Security is a very important aspect, but I prefer to leave the hardcore stuff to the security people.

That being said, integrating a website with ACS to authenticate users against an identity provider like Windows Live ID, Google or Facebook is quite interesting. Many of us have written websites before using our own custom user store or a membership provider. We end up holding sensitive data, which is always a possible security leak. Integrating with an identity provider like Windows Live ID, Google or Facebook gives our users a single sign-on experience, and we no longer have to worry about storing that sensitive data.

How many times have you registered on some random website with a username and password that you could no longer remember later? Would it not be easier to identify yourself to all those websites with your Google or Facebook identity? It saves you the hassle of remembering all your different usernames and passwords, and it lowers the risk of your credentials being exposed, since they are only stored at identity providers such as Google or Facebook.
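
As a small illustration of what you get back after sign-in, here is a sketch that reads the claims ACS issued, using the Windows Identity Foundation API. It assumes the website has already been configured against ACS (for example with the Add STS Reference wizard in Visual Studio); the helper class itself is hypothetical:

    using System.Threading;
    using Microsoft.IdentityModel.Claims;

    public static class ClaimsHelper
    {
        // After the user signed in through Windows Live ID, Google, Facebook, ...
        // ACS normalizes the identity provider's token into a set of claims.
        public static string GetUserName()
        {
            var identity = Thread.CurrentPrincipal.Identity as IClaimsIdentity;
            if (identity == null || !identity.IsAuthenticated)
                return null;

            foreach (Claim claim in identity.Claims)
            {
                if (claim.ClaimType == ClaimTypes.Name)
                    return claim.Value;
            }
            return identity.Name;
        }
    }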

Continue reading

Windows Azure service bus messaging with publish/subscribe pattern using topics and subscriptions

In this post we will look into the publish/subscribe pattern with the Windows Azure Service Bus, a messaging platform in the cloud. One of the common approaches to building disconnected and reliable systems is the use of queues:

Windows Azure Service Bus messaging with Publish/Subscribe pattern with topic and subscription

The Windows Azure Service Bus allows messaging using the publish/subscribe pattern, which looks like this:

Windows Azure Service Bus messaging with Publish/Subscribe pattern with topic and subscription

The sender sends a message to a topic, and anyone interested in those messages can subscribe to the topic. Subscribers can receive every message, but they can also apply filters to the incoming messages to only receive the messages that match the defined filter.
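
A sketch of how this could look in code with the Microsoft.ServiceBus.Messaging API from the Windows Azure SDK; the connection string, topic name, subscription name and the Amount property are example values of mine:

    using Microsoft.ServiceBus;
    using Microsoft.ServiceBus.Messaging;

    class PubSubDemo
    {
        static void Main()
        {
            string connectionString = "<your Service Bus connection string>";

            // Create the topic and a filtered subscription if they do not exist yet.
            var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);
            if (!namespaceManager.TopicExists("orders"))
                namespaceManager.CreateTopic("orders");
            if (!namespaceManager.SubscriptionExists("orders", "bigorders"))
                namespaceManager.CreateSubscription("orders", "bigorders",
                    new SqlFilter("Amount > 100")); // only messages matching the filter

            // Publisher: send a message to the topic.
            var topicClient = TopicClient.CreateFromConnectionString(connectionString, "orders");
            var message = new BrokeredMessage("example order");
            message.Properties["Amount"] = 250;
            topicClient.Send(message);

            // Subscriber: receive the messages that passed the subscription filter.
            var subscriptionClient = SubscriptionClient.CreateFromConnectionString(
                connectionString, "orders", "bigorders");
            BrokeredMessage received = subscriptionClient.Receive();
            if (received != null)
            {
                received.Complete();
            }
        }
    }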

Continue reading

Windows Azure Data Sync with SQL Azure database hub, synchronization group and client sync agent

Windows Azure is the public cloud computing platform Microsoft provides. Along with the Windows Azure platform, Microsoft also provides a set of tools to work on the platform and to integrate with on-premise solutions.

Windows Azure Data Sync

One of the integration features provided with the Windows Azure platform is Windows Azure Data Sync.

Quoted from the SQL Azure Data Sync documentation on MSDN:

SQL Azure Data Sync is an Azure service that enables you to easily synchronize geographically dispersed SQL Server and SQL Azure databases. Data Sync provides an intuitive UI with optional tutorials that guide you through the process of creating database groupings (sync groups) that are synchronized together. You define exactly what data from each sync group is synchronized – tables, columns and rows (using row filters), as well as how frequently the synchronization jobs are performed.

SQL Azure Data Sync uses a hub-spoke topology with the hub always being a SQL Azure database.

Our company has three departments around the world, and each of them works with its own database, since connecting to a central database would introduce too much latency for the departments. For that reason, they all use an identical database, but each one is local to its department and contains the information related to that department. Now the departments want to share information with each other, but they want to keep their databases local because of the latency issue. This means we cannot set up a central database for all the departments to connect to, so we need a synchronization mechanism. There are multiple synchronization mechanisms, such as SQL Server replication. We want to use the Windows Azure Data Sync service to synchronize the information between the different departments on a one-hour basis. One of the good things about Windows Azure Data Sync is that we do not have to do any coding or set up any synchronization agent; we just have to configure the data synchronization in the Windows Azure portal.

To use Windows Azure or Windows Azure Services you will need an active Windows Azure subscription. Windows Azure Data Sync can be found in the portal:

Windows Azure Data Sync

Continue reading

Building and consuming REST services with ASP.NET Web API using MediaTypeFormatter and OData support

The ASP.NET Web API has been released with the ASP.NET MVC 4 beta, which came out two days ago (14/02/2012).

You can download the ASP.NET MVC4 beta release, that includes the Web API, here:
http://www.microsoft.com/download/en/details.aspx?id=28942

Quoted directly from the Microsoft website:

ASP.NET MVC 4 also includes ASP.NET Web API, a framework for building and consuming HTTP services that can reach a broad range of clients including browsers, phones, and tablets. ASP.NET Web API is great for building services that follow the REST architectural style, plus it supports RPC patterns.

If you do not know what REST stands for and why it could be of any use, you can watch this 1h18m08s video on channel9 by Aaron Skonnard: http://channel9.msdn.com/Blogs/matthijs/Why-REST-by-Aaron-Skonnard

Considering how popular REST is these days, it might be interesting to cover the new Web API in this post. Originally we saw REST services come up through the WCF pipeline, like WCF Data Services, or a regular WCF service with the WebHttpBinding, which works with HTTP verbs like GET, POST, PUT and DELETE. However, WCF was created as a messaging platform on which we work with SOAP messages, and the entire WCF pipeline is optimized for messaging. REST services work a bit differently and do not use SOAP. Apparently Microsoft came to the conclusion that REST did not integrate well with the WCF messaging pipeline, so they moved the ability to create REST services into the ASP.NET platform.
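
To give a first taste, a minimal Web API controller could look like the sketch below; the ProductsController and its hard-coded list are purely illustrative:

    using System.Collections.Generic;
    using System.Web.Http;

    // Actions are matched to HTTP verbs by convention: methods starting with
    // Get/Post/Put/Delete respond to the corresponding verb on the default
    // "api/{controller}/{id}" route.
    public class ProductsController : ApiController
    {
        private static readonly List<string> Products =
            new List<string> { "Bike", "Helmet", "Gloves" };

        // GET api/products
        public IEnumerable<string> GetAllProducts()
        {
            return Products;
        }

        // GET api/products/1
        public string GetProduct(int id)
        {
            return Products[id];
        }
    }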

Continue reading

Entity Framework improving performance using compiled queries for subsequent identical queries

One of the features to improve performance with Entity Framework is the possibility to use compiled queries. You might, for example, have a LINQ query searching for products by a name variable that the user passes along. The query gets executed a lot, but usually with a different parameter, depending on what product name the client wants to search for. The query the Entity provider constructs remains the same; only the variable passed by the user changes. By using Entity Framework compiled queries, you can compile this query, which results in the query being held in a cache.

Subsequent executions of this query will get the query from the cache, instead of translating the LINQ query into a provider-agnostic query again. This saves the overhead of constructing the query at runtime on every execution. Stored procedures might be even more interesting for performance, but that is out of the scope of this post.

We will work with the Product entity from the AdventureWorks database:

Entity Framework Compiled Queries
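
A minimal sketch of such a compiled query could look as follows; the AdventureWorksEntities context and its Products entity set are assumed to come from the generated AdventureWorks model:

    using System;
    using System.Data.Objects;
    using System.Linq;

    public static class ProductQueries
    {
        // Compiled once; every subsequent execution reuses the cached
        // translation instead of constructing the query again.
        public static readonly Func<AdventureWorksEntities, string, IQueryable<Product>>
            ProductsByName = CompiledQuery.Compile(
                (AdventureWorksEntities context, string name) =>
                    context.Products.Where(p => p.Name.Contains(name)));
    }

    // Usage: the first call pays the translation cost, later calls do not.
    // using (var context = new AdventureWorksEntities())
    // {
    //     var bikes = ProductQueries.ProductsByName(context, "Bike").ToList();
    // }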

Continue reading

Entity Framework using existing POCO objects with change tracking

I’ve recently been covering some Entity Framework topics such as:

01/02/2012: Entity Framework using self tracking entities in a multi-layered architecture
31/01/2012: Entity Framework using Partial classes to add business logic and validation to generated entities
31/01/2012: Entity Framework mapping insert, update and delete stored procedures and using stored procedures as functions
27/01/2012: Entity framework Code First with Entity Framework 4.1 and System.Data.Entity.DbContext
25/01/2012: Entity framework creating your ADO.NET entity data model first and creating your database from your model

One of the topics I wanted to cover today was using existing POCO classes with Entity Framework. However, instead of covering this myself, I found a three-part series on the ADO.NET team blog covering POCO objects with change tracking, which I will share instead:

POCO in the Entity Framework: Part 1 – The Experience
POCO in the Entity Framework: Part 2 – Complex Types, Deferred Loading and Explicit Loading
POCO in the Entity Framework: Part 3 – Change Tracking with POCO

Cheers,

Robbin

Entity Framework using self tracking entities in a multi-layered architecture

When using a multi-layered architecture, you will often find that the data is provided by back-end services. The client application gets the data it needs through the back-end services. That way the data logic is decoupled from the client application and can be reused by other applications that also need access to the same data.

In the .NET world the back-end services are most commonly WCF services, which provide data to the multiple clients that need some of the data the WCF service exposes. Client applications request data from the WCF service, which executes a query against the data store, in our case through Entity Framework, and transfers the entities to the client application.

One of the issues that might arise in a scenario like this is that the entity transferred from the back-end WCF service can no longer be tracked for changes once it reaches the client. The transferred entity is no longer within the scope of the ObjectContext at our WCF service, making it impossible for the ObjectStateManager in the ObjectContext to track changes to the entity that was returned. A solution to this scenario is working with self-tracking entities: entities that carry logic in their code to do the change tracking themselves, instead of having the change tracking done by the ObjectStateManager in the ObjectContext.

There are a few considerations you need to keep in mind to use self-tracking entities (a minimal service sketch follows this list):

  • Make sure that your client project has a reference to the assembly containing the entity types. If you add only the service reference to the client project, the client project will use the WCF proxy types and not the actual self-tracking entity types. This means that you will not get the automated notification features that manage the tracking of the entities on the client. If you intentionally do not want to include the entity types, you will have to manually set change-tracking information on the client for the changes to be sent back to the service.
  • Calls to the service operation should be stateless and create a new instance of the object context. We also recommend that you create the object context in a using block.
  • When you send the graph that was modified on the client to the service and then intend to continue working with the same graph on the client, you have to manually iterate through the graph and call the AcceptChanges method on each object to reset the change tracker. If objects in your graph contain properties with database-generated values (for example, identity or concurrency values), the Entity Framework will replace values of these properties with the database-generated values after the SaveChanges method is called. You can implement your service operation to return saved objects or a list of generated property values for the objects back to the client. The client would then need to replace the object instances or object property values with the objects or property values returned from the service operation.
  • Self-tracking entities are not enabled to perform lazy loading.
  • When you change the relationship between objects by setting the foreign key property, the reference navigation property is set to null and not synchronized to the appropriate principal entity on the client. After the graph is attached to the object context (for example, after you call the ApplyChanges method), the foreign key properties and navigation properties are synchronized.
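
To tie these considerations together, here is a minimal sketch of a stateless WCF service using self-tracking entities. The AdventureWorksEntities context, Product entity and IProductService contract are assumed names; ApplyChanges is the extension method generated by the self-tracking entities template:

    using System.Linq;

    public class ProductService : IProductService
    {
        public Product GetProduct(int productId)
        {
            // A new object context per call, in a using block, as recommended above.
            using (var context = new AdventureWorksEntities())
            {
                return context.Products.Single(p => p.ProductID == productId);
            }
        }

        public void UpdateProduct(Product product)
        {
            using (var context = new AdventureWorksEntities())
            {
                // ApplyChanges reads the entity's own change tracker and replays
                // the recorded changes into the ObjectStateManager before saving.
                context.Products.ApplyChanges(product);
                context.SaveChanges();
            }
        }
    }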