Introduction to the Windows Phone platform


Shared Windows Core

Windows 8 and Windows Phone 8 share many components at the operating system level

Shared Core means

• OS components such as the kernel, networking, graphics support, file system and multimedia are the same on both Windows 8 and Windows Phone 8

• Hardware manufacturers work with the same driver model on both platforms

• Windows Phone gets support for multi-core processors and other hardware features that Windows has had for years

• This solid, common foundation makes it easier to extend the Windows Phone platform into the future

It doesn’t mean

• Windows 8 and Windows Phone 8 developers work to exactly the same APIs

• (though you will see more commonality as new features are introduced to both platforms in the future)

Windows Phone 8 supports

• Managed app dev using the WP7.1, WP8.0 .NET and WinPRT APIs

• Native app dev using WinPRT and Win32

• Games dev using the WP7.1 XNA framework

• Games dev using Direct3D or DirectX


Since we are going to look at managed app dev, let me point you to the classes you can use for WP8 development. There are basically two sets of APIs.


Now, which one should you choose?

Windows Phone .NET API

This has full support for the Windows Phone 7.1 classes. So if you code against these supported classes, your app will run on WP7 too. If you have an existing app built for WP7.1, you do not have to redevelop it from scratch; you can reuse the existing code base. It contains classes and types from the System and Microsoft.Phone namespaces.

• There have been new classes added for Windows Phone 8.0, for example

• Microsoft.Phone.Wallet

• Microsoft.Phone.Tasks.ShareMediaTask

• Microsoft.Phone.Tasks.MapsTask

• Microsoft.Phone.Storage.ExternalStorage

• Microsoft.Phone.Networking.Voip

Windows Phone Runtime API

Windows Phone Runtime is a subset of the full WinRT, plus some phone-specific additions. Windows (Phone) Runtime is implemented in C++ and projected into C#, VB.NET, and C++.

You would typically use this library if you are developing for both Windows Phone 8 and Windows 8/RT, since the Windows Phone Runtime API and the WinRT API share a lot of code.

So it is mostly a matter of choice which API you use, since equivalent classes exist in both. A few common pairs are listed below, followed by a short sketch showing one pair in action.

.NET API → Windows Phone Runtime API
System.IO.IsolatedStorage → Windows.Storage
System.Net.Sockets → Windows.Networking.Sockets
System.Threading.ThreadPool → Windows.System.Threading.ThreadPool
Microsoft.Devices.Sensors → Windows.Devices.Sensors
System.Device.Location → Windows.Devices.Geolocation
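As a quick illustration of one such pair, here is a minimal sketch (assuming a Windows Phone 8 project; the lambda bodies are placeholders) that queues the same background work through the .NET thread pool and through the Windows Phone Runtime thread pool:

    // .NET API: the classic managed thread pool
    System.Threading.ThreadPool.QueueUserWorkItem(state =>
    {
        // do some background work
    });

    // Windows Phone Runtime API: the WinRT thread pool
    Windows.System.Threading.ThreadPool.RunAsync(workItem =>
    {
        // do the same background work
    });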

Just to let you know: unlike Windows 8, HTML/JS app development isn’t possible on WP8. But we can use HTML5 hosted in a WebBrowser control (web view) to build cross-platform apps.


Exception Handling in Task-Based Parallel Programming


As you learned in the previous posts, Tasks execute asynchronously and in parallel. Now what happens when there is an exception? How and where is it thrown?

This blog post answers all of the above questions.

1. What happens when an exception occurs?

When code running inside a task throws an exception, the task is terminated. The exception is caught, wrapped in an AggregateException, and stored in the task object’s Exception property.

Now, we know that in synchronous programming an exception propagates upward from the callee to the caller.

But in asynchronous programming the scenario is different: if the exception is unhandled, it is rethrown at a later time.

2. What do you mean by ‘a later time’?

The AggregateException is rethrown when you call Wait or WaitAll on the task, or access its Result property. If the exception is never handled or observed, it is rethrown when the task object is garbage collected.

3. What is the proper way to handle exceptions?

    Task<int> T_1 = Task<int>.Factory.StartNew(() =>
    {
        //throw new ArithmeticException();
        //do something awesome that generates an exception
        return 1;
    });

    try
    {
        int a = T_1.Result;
    }
    catch (AggregateException AE)
    {
        AE = AE.Flatten();
        foreach (Exception ae in AE.InnerExceptions)
        {
            //handle each Exception ae
        }
    }

4. What is an AggregateException?

.NET throws all the exceptions from a task wrapped in a single exception called AggregateException. As we know, each task can create many subtasks, which can in turn throw exceptions. So the AggregateException returned by a task contains all the exceptions thrown by its subtasks.

Consider this scenario: Task A creates Task B and Task C, and in doing so Task A is considered the parent of both Task B and Task C. These relationships come into play primarily in regard to lifetimes. A task isn’t considered completed until all its children have completed, so if you call Wait on Task A, that Wait won’t return until both B and C have also completed. So what happens if both throw an exception? Then both exceptions are wrapped in an AggregateException and rethrown.

5. How do we access the individual exceptions?

We access them through the AggregateException class. But it might happen that a child of the parent task spawns more child tasks, which themselves throw exceptions. Hence we can see a tree forming, whose leaves are exceptions.

By default, AggregateException retains this hierarchical structure, which can be helpful when debugging because the hierarchy of the contained aggregates will likely correspond to the structure of the code that threw those exceptions. However, this can also make aggregates more difficult to work with in some cases. To account for that, the Flatten method removes the layers of contained aggregates by creating a new AggregateException that contains the non-AggregateException exceptions from the whole hierarchy.
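Here is a minimal sketch (the exception and the parent/child structure are purely illustrative) of how an attached child task produces a nested AggregateException and how Flatten unwraps it:

    Task parent = Task.Factory.StartNew(() =>
    {
        // An attached child: its exception propagates to the parent
        Task.Factory.StartNew(() =>
        {
            throw new InvalidOperationException("child failed");
        }, TaskCreationOptions.AttachedToParent);
    });

    try
    {
        parent.Wait();
    }
    catch (AggregateException AE)
    {
        // Without Flatten, AE.InnerExceptions[0] is itself an AggregateException
        // wrapping the child's exception; Flatten removes that extra layer.
        foreach (Exception inner in AE.Flatten().InnerExceptions)
        {
            Console.WriteLine(inner.GetType().Name);   // InvalidOperationException
        }
    }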

6. What is ‘observing’ exceptions?

Now, as we said, unhandled exceptions will be rethrown during garbage collection. To avoid that, there are a few ways you can “observe” the exceptions, in other words allow them to be rethrown so that you can handle them:

• Calling Wait, Result or WaitAll on a task allows the task to rethrow its exceptions. (Task.WaitAny does not throw the tasks’ exceptions.)

• Accessing the task’s Exception property after the task completes.

• Subscribing to TaskScheduler.UnobservedTaskException.

Accessing the Exception property after the task completes does not rethrow the exception. If the property is not null, you know that an exception was generated. Even if you don’t handle it, the runtime won’t rethrow it during garbage collection, because you have observed the exception.
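For example, here is a minimal sketch (the exception is illustrative) of observing a fault through the Exception property in a continuation, without ever calling Wait or Result:

    Task T_fail = Task.Factory.StartNew(() =>
    {
        throw new InvalidOperationException("something went wrong");
    });

    T_fail.ContinueWith(t =>
    {
        // Reading t.Exception marks the exception as observed, so it will not be
        // rethrown when the task object is garbage collected.
        if (t.Exception != null)
        {
            foreach (Exception inner in t.Exception.Flatten().InnerExceptions)
            {
                Console.WriteLine(inner.Message);   // handle or log each inner exception
            }
        }
    }, TaskContinuationOptions.OnlyOnFaulted);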

Difference between ContinueWhenAll and WaitAll


This is a part of the blog series “Asynchronous Programming Using DotNet”. Please visit this page to see the “index” of all the posts I wrote in this series.

So what exactly is the difference between the ContinueWhenAll and WaitAll methods? When and where should you use them?

Consider the following piece of code

    int start = Environment.TickCount;

    Task T1 = Task.Factory.StartNew(() =>
    {
        Thread.Sleep(3000);
    });

    Task T2 = Task.Factory.StartNew(() =>
    {
        Thread.Sleep(3000);
    });

    Task T3 = Task.Factory.StartNew(() =>
    {
        Thread.Sleep(3000);
    });

    Task[] T_Arr = { T1, T2, T3 };

    Task.WaitAll(T_Arr);

    int stop = Environment.TickCount;
    textBox1.Text = (stop - start).ToString();

The run time is a little over 3000 milliseconds.


As you can see, Task.WaitAll takes an array of tasks as an argument and blocks until all of them complete. The tasks themselves run asynchronously, so the waiting period is roughly the maximum of the times taken by T1, T2, and T3. During that wait the UI freezes, because the UI thread is busy waiting. So even though the tasks run asynchronously, the waiting is synchronous.

So what is the solution?

The new code is

    TaskFactory tf = new TaskFactory();

    tf.ContinueWhenAll(T_Arr, (a) =>
    {
        int stop = Environment.TickCount;
        textBox1.Text = (stop - start).ToString();
    }, new CancellationToken(), TaskContinuationOptions.None, TaskScheduler.FromCurrentSynchronizationContext());

ContinueWhenAll registers a continuation: its delegate is invoked only once all the tasks have finished, so there is no synchronous waiting. In our case, since we are updating the UI, the continuation runs on the UI thread (via FromCurrentSynchronizationContext), but while the tasks are still running nothing blocks the UI thread. Hence your application remains responsive even while your tasks are incomplete.

Asynchronous Programming doesn’t imply multithreading!


This is a part of the blog series “Asynchronous Programming Using DotNet”. Please visit this page to see the “index” of all the posts I wrote in this series.

A few days ago, while talking to some of my friends, I realized that a lot of people harbor the misconception that implementing asynchronous models makes their program multithreaded and parallel. I will try to debunk this myth in this post.

People often think asynchronous work equals multithreading, which is absolutely not true.

In this post we are going to differentiate between asynchrony and parallelism.

Now let’s take an example: say we are consuming a web service or using WebClient, which involves a call over the network and significant waiting time. The first thing that comes to mind when we do such work is that we must keep the UI responsive while our program waits. A lot of people think that to keep the UI responsive there HAS to be a separate thread running and taking care of our web request.

But the truth is different. The request only needs a thread at a couple of specific points, and it is entirely possible to do the work on ONE thread without blocking it or freezing the UI.

I will back up this claim logically:

1. Most of the time is spent waiting for a response from the server. When a client machine sends a request, it has to wait for the reply. So what would a thread do while the request is in progress? If we use the synchronous programming model, the thread is simply blocked, as there is no work for it to do.

2. Even while the client computer is sending or receiving data to or from the server, we do not need a different thread, as this is taken care of by the network hardware. The network hardware inside the computer is quite capable of handling such tasks. When we actually send the data, the device driver for the NIC programs the hardware, pointing it at the data to send. The network hardware in most cases can fetch the data to be sent directly from main memory, so the driver only needs to tell it where the data lives. The CPU’s work is therefore limited to telling the NIC where to fetch the data from and where to send it, and the time taken to do that is negligible compared to the time taken to actually send the data.

3. The above is true for most input/output tasks. It’s the same with disk access: the CPU just needs to tell the disk controller which data needs to be accessed. The time the CPU takes to issue that instruction is minuscule compared to the time the disk controller takes to move its parts and access the data.

Practically, most Windows applications spend a lot of time waiting. Multithreading won’t make the “waiting” faster; it’s the asynchrony that keeps the program responsive. Multithreading matters when there is a specialized background computation running; that is when the benefit of multiple cores becomes more pronounced.

The main point of asynchronous code is mostly to reduce the number of threads we are using. It does that by taking threads away from code that would otherwise block them: code is only allowed to consume a thread at the moments when it actually has constructive (CPU) work to do. This is what leaves the UI thread free to respond to user input.
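To make this concrete, here is a minimal sketch, assuming a WinForms form with a label1 control, that issues a web request asynchronously with Task.Factory.FromAsync; no thread in the application sits blocked while the server is responding:

    WebRequest request = WebRequest.Create("http://feeds.bbci.co.uk/news/world/rss.xml");

    Task<WebResponse> responseTask = Task<WebResponse>.Factory.FromAsync(
        request.BeginGetResponse, request.EndGetResponse, null);

    responseTask.ContinueWith(t =>
    {
        // This continuation runs only after the response has arrived; during the
        // network wait no thread in the application was blocked.
        using (WebResponse response = t.Result)
        {
            label1.Text = response.ContentLength.ToString();
        }
    }, TaskScheduler.FromCurrentSynchronizationContext());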

Asynchronous programming Deep Dive part 2


 

Picking up from where I left off, I had promised to show you the code behind the “Both” button. The initial (synchronous) code is:

    WebResponse Response = getResponse("http://feeds.bbci.co.uk/news/world/rss.xml?edition=uk#");
    int bbc = Int32.Parse(processBBCNews(Response).ToString());

    WebResponse Response1 = getResponse("https://news.google.com/news/feeds?ned=in&topic=h&output=rss");
    int gn = Int32.Parse(processGoogleNews(Response1).ToString());

    int time_stop = System.Environment.TickCount;
    //Computing the difference between the number of posts
    textBox3.Text = (Math.Abs(bbc - gn)).ToString();

Now, the reduction in time can be felt most in this part, because we will designate two different tasks to work on the two services.

To do this, if we blindly follow the previous method, we will end up with this:

    int time_start = System.Environment.TickCount;

    Task<WebResponse> T1 = Task<WebResponse>.Factory.StartNew(() =>
    {
        WebResponse Response = getResponse("http://feeds.bbci.co.uk/news/world/rss.xml?edition=uk#");
        return Response;
    });

    Task<WebResponse> T2 = Task<WebResponse>.Factory.StartNew(() =>
    {
        WebResponse Response = getResponse("https://news.google.com/news/feeds?ned=in&topic=h&output=rss");
        return Response;
    });

    Task<int> T1_next = T1.ContinueWith((antecedent) =>
    {
        int bbc = Int32.Parse(processBBCNews(antecedent.Result).ToString());
        return bbc;
    },
    TaskScheduler.FromCurrentSynchronizationContext());

    Task<int> T2_next = T2.ContinueWith((antecedent) =>
    {
        int gn = Int32.Parse(processGoogleNews(antecedent.Result).ToString());
        return gn;
    },
    TaskScheduler.FromCurrentSynchronizationContext());

    //Computing the difference between the number of posts
    textBox3.Text = (Math.Abs(T1_next.Result - T2_next.Result)).ToString();

    int time_stop = System.Environment.TickCount;
    label3.Text = (time_stop - time_start).ToString();

But this will not run; in fact, it will freeze your application indefinitely. Why so?

The reason is that T1_next.Result is accessed before T1 (and hence T1_next) has actually finished. This is straightforward once we remember that <task_name>.Result implicitly calls Task.Wait(), so it blocks the UI thread while waiting for the result.

Meanwhile, once T1 finishes, T1_next is queued to run on the UI thread (because of FromCurrentSynchronizationContext), but the UI thread never gets free because it is blocked waiting for T1_next. Thus we have a deadlock. Always watch out for such deadlocks when you are dealing with synchronization contexts. So we need to do away with the blocking wait.

We need to change our code a bit:

 

    int time_start = System.Environment.TickCount;

    Task<ListBox> T1 = Task<ListBox>.Factory.StartNew(() =>
    {
        WebResponse Response = getResponse("http://feeds.bbci.co.uk/news/world/rss.xml?edition=uk#");
        //Instead of returning the response, we pass it to another processing function
        //that does not update the UI thread
        ListBox bbc = processBBCNews_new(Response);
        Thread.Sleep(2000);
        //The processing function returns a ListBox
        return bbc;
    });

    Task<ListBox> T2 = Task<ListBox>.Factory.StartNew(() =>
    {
        WebResponse Response = getResponse("https://news.google.com/news/feeds?ned=in&topic=h&output=rss");
        ListBox gn = processGoogleNews_new(Response);
        return gn;
    });

    //The trick is to wait for both T1 and T2 without blocking the UI thread
    Task[] arr = { T1, T2 };
    TaskFactory tf = new TaskFactory();
    //ContinueWhenAll acts like a callback and does not block the UI thread, hence the UI is still responsive
    tf.ContinueWhenAll(arr, (a) =>
    {
        int count = 0, count1 = 0;

        foreach (string item in T1.Result.Items)
        {
            //The returned ListBox is iterated to add its values to the UI ListBox
            count++;
            listBox1.Items.Add(item);
        }
        foreach (string item in T2.Result.Items)
        {
            count1++;
            listBox2.Items.Add(item);
        }
        //Computing the difference between the number of posts
        textBox3.Text = (Math.Abs(count1 - count)).ToString();
        int time_stop = System.Environment.TickCount;
        label3.Text = (time_stop - time_start).ToString();
    }, new CancellationToken(), TaskContinuationOptions.None, TaskScheduler.FromCurrentSynchronizationContext());

If you go through the code, you will see I have used the ContinueWhenAll method; I explained the reason in the previous post on ContinueWhenAll vs WaitAll, so visit that if you want to know more.

As I had promised, I have uploaded the entire app code here.

Asynchronous programming Deep Dive part 1


In my previous post, I introduced the concepts of Wait, WaitAll, and WaitAny. Let’s see a few of these in action. Moreover, let’s learn how to efficiently convert our own synchronous code to take advantage of asynchronous and parallel methodologies.

The app

We have a simple app which fetches RSS news feeds from Google and the BBC. I do not want to go into the implementation details or the XML parsing; if you want to see that, you can just download the code.


So let’s click “BBC News” and then “Google”. You can see two numbers appear below the textboxes. These numbers indicate how many milliseconds it took for the respective servers to reply with a response. As you can see, the Google servers have quite a high latency of more than a second. The numbers in the textboxes indicate the number of headlines.

Now, the “Both” button synchronously fires both requests, one after the other. The typical time it takes is to the tune of almost 3000 milliseconds.

Now let’s see the code. The code for the BBC button is as follows:

    int time_start = System.Environment.TickCount;
    String url = "http://feeds.bbci.co.uk/news/world/rss.xml?edition=uk#";
    WebResponse Response = getResponse(url);
    textBox1.Text = processBBCNews(Response).ToString();
    int time_stop = System.Environment.TickCount;
    label1.Text = (time_stop - time_start).ToString();

Now, the code for the Google News button is exactly the same; only the URL and the labels being updated are different.

getResponse is a user-defined function that sends the HTTP GET request and receives the response; processBBCNews extracts each title from the XML and adds it to the list box.
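getResponse itself is not shown in the post, but it might look roughly like the following; this is a minimal sketch under the assumption that it simply wraps HttpWebRequest and blocks until the reply arrives:

    private WebResponse getResponse(string url)
    {
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
        request.Method = "GET";
        // GetResponse is synchronous: the calling thread blocks until the server replies
        return request.GetResponse();
    }

processBBCNews is shown below: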

    private int processBBCNews(WebResponse Response)
    {
        listBox1.Items.Clear();
        XmlReader rdr = XmlReader.Create(new System.IO.StreamReader(Response.GetResponseStream()));
        int count = 0;
        while (rdr.Read())
        {
            if (rdr.Name.Equals("title"))
            {
                string a = rdr.ReadElementContentAsString();
                listBox1.Items.Add(a);
                count++;
            }
        }
        return count;
    }

In the code-behind for the “Both” button, we do exactly the same thing: we first send the HTTP GET request for Google News, receive the response, and process it; then we fire off another HTTP GET request for BBC News and process that response. The total time taken is noted.

Now, to convert the synchronous code to asynchronous code:

    private void button1_Click(object sender, EventArgs e)
    {
        int time_start = System.Environment.TickCount;

        String url = "http://feeds.bbci.co.uk/news/world/rss.xml?edition=uk#";

        //We have specified the return type of the code that will be executed in the task.
        //This allows us to use the Task like a function and achieve encapsulation,
        //hence variables that are used inside do not need to be declared outside.
        Task<WebResponse> T = Task<WebResponse>.Factory.StartNew(() =>
        {
            WebResponse Response = getResponse(url);
            return Response;
        });

        //As we know, processing of the results can only occur once the response is
        //received, hence instead of using callbacks we are using ContinueWith.
        //In this series I will encourage you to think in terms of continuations, not callbacks.
        //Here "antecedent" refers to the task that has just finished.
        Task T2 = T.ContinueWith((antecedent) =>
        {
            textBox1.Text = processBBCNews(antecedent.Result).ToString();
            int time_stop = System.Environment.TickCount;
            label1.Text = (time_stop - time_start).ToString();
        },
        //As we already know, the view (UI) can only be updated from the UI thread,
        //hence we are scheduling this continuation on the UI thread.
        TaskScheduler.FromCurrentSynchronizationContext());
    }

All the explanations are given through the comments. In the next blog post I will share the code behind the “Both” button.

Shared Variables while using Task Parallel Library


A word of caution while using shared variables.

If you look at my previous example, you will see that my calculate-PI method returns a variable j which is declared local to the function and not to the task. So that variable is shared between the UI thread and the thread running Task T.

But there are certain things we need to keep in mind while using shared variables, since they might lead to race conditions. Let me illustrate the problem.

Consider two tasks, T and T_new. T operates on a value called “variable”, which is shared between the UI thread, the T_new thread, and itself. The intended flow is that T runs first and populates the value of “variable”; then, AFTER it finishes, Task T_new reads the variable and computes “Result”. The problem is that there is absolutely no guarantee that T will finish before T_new: it might happen that T_new reads “variable” before T has updated its value!

This is a race condition, since both tasks race to access the variable; we cannot predict beforehand which task will access “variable” first!
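A minimal sketch of the two situations (ExpensiveComputation and the variable names are purely illustrative):

    int variable = 0;

    // Racy version: nothing guarantees that T finishes before T_new reads "variable",
    // so T_new may compute its result from the stale value 0.
    Task T = Task.Factory.StartNew(() => { variable = ExpensiveComputation(); });
    Task T_new = Task.Factory.StartNew(() => { int result = variable * 2; });

    // Safe version: the continuation runs only after T_safe has finished updating "variable".
    Task T_safe = Task.Factory.StartNew(() => { variable = ExpensiveComputation(); });
    Task T_next = T_safe.ContinueWith(antecedent => { int result = variable * 2; });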

Sometimes a race condition is difficult to catch, as it may not appear in Debug mode. In Release mode, .NET optimizes a lot of code, so the race condition may only appear then. It is also often platform-specific, which makes it very difficult to catch.

Now, coming back to our calculate-PI program, it is free from any race condition. The variable j is first accessed on the Task T thread, and then accessed again in the continuation. But since we use ContinueWith, the continuation can only run after task T has finished updating j, hence there isn’t any race condition.

So what do we do when we need to wait for a task to finish before starting another task?

We use the Wait method. More on that in the next section.
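As a small preview, here is a minimal sketch (CalculatePi is a stand-in for the real method) of how Wait enforces that ordering:

    int j = 0;
    Task T = Task.Factory.StartNew(() => { j = CalculatePi(); });

    // Wait blocks the calling thread until T has finished,
    // so the read below is guaranteed to see the updated value of j.
    T.Wait();
    Console.WriteLine(j);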