Asynchronous Programming doesn’t imply multithreading!

This post is part of the blog series “Asynchronous Programming Using DotNet”. Please visit this page to see the index of all the posts I have written in this series.

A few days ago, while talking to some of my friends, I realized that a lot of people harbor a misconception: implementing an asynchronous model makes their program multithreaded and parallel. I will try to debunk this myth in this post.

People often think that asynchronous work equals multithreading, which is absolutely not true.

In this post we are going to differentiate asynchrony from parallelism.

Now let’s take an example: say we are consuming a web service, or using WebClient, which involves a call over the network and significant waiting time. The first thing that comes to mind when doing such work is that we must keep the UI responsive while our program waits. A lot of people think that to keep the UI responsive there HAS to be a separate thread running and taking care of our web request.

But the truth is different. The request needs a thread only at specific points. It is entirely possible to do the work on ONE thread without blocking that thread or freezing up the UI.

Let me back up this claim logically:

1. Most of the time is spent waiting for a response from the server. When a client machine sends a request, it has to wait for the reply. So what would a thread do while the request is in progress? Under a synchronous programming model, the thread is simply put to sleep, as there is no work for it to do.

2. Even while the client computer is sending or receiving data, we do not need a separate thread, because this is taken care of by the network hardware. The network hardware inside the computer is quite capable of handling such tasks. When we actually send data, the device driver for the NIC programs the hardware, pointing it to the data to send. In most cases the network hardware can fetch the data to be sent directly from main memory, so the driver only needs to tell it where the data lives. The CPU's work is thus limited to telling the NIC where to fetch the data from and where to send it, and the time taken to do that is negligible compared to the time taken to send the data.

3. The above is true for most input/output tasks. It is the same with disk access: the CPU just needs to tell the disk controller which data needs to be accessed. The time the CPU takes to issue that instruction is minuscule compared to the time the disk controller takes to move its parts and reach the data.

Practically, most Windows applications spend a lot of time waiting. Multithreading won't make the waiting faster; it is asynchrony that keeps the program responsive. Multithreading matters when there is a specialized background computation running; that is when the benefit of multiple cores becomes pronounced.

The main point of asynchronous code is mostly to reduce the number of threads we use. It does this by taking threads away from code that would otherwise block them: code is allowed to consume a thread only at those moments when it actually has constructive work (useful CPU work) to do. This is what leaves the UI thread free to respond to user input.
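This claim can be seen in a small console sketch (not from the original post). It uses Task.Delay, available from .NET 4.5, as a stand-in for an I/O wait: while the delay "runs", no thread is consumed; a timer fires later and only then does the continuation need a thread.

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;

static class AsyncNotThreadsDemo
{
    public static bool SchedulingDoesNotBlock()
    {
        var sw = Stopwatch.StartNew();
        // Task.Delay stands in for a network wait: no thread sits blocked
        // while it ticks; the continuation is scheduled when the timer fires.
        Task<string> reply = Task.Delay(500).ContinueWith(_ => "response");
        long atSchedule = sw.ElapsedMilliseconds;   // we reach this line immediately
        string body = reply.Result;                 // block here only for the demo's sake
        return atSchedule < 100 && body == "response";
    }

    static void Main()
    {
        Console.WriteLine(SchedulingDoesNotBlock()); // True
    }
}
```

The method returns before the 500 ms "wait" has elapsed, which is exactly what keeps a UI thread free in a real application.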


Asynchronous Programming Series

Here are the consolidated links to the various posts that I have written on Asynchronous Programming.

  1. Lambda Expressions and closures
  2. Lambda Expressions and closures Part 2
  3. What do we mean by Async and Parallel Programming and why do we need it?
  4. Asynchronous programming doesn’t necessarily imply multithreading.
  5. Introduction to Tasks and Task Parallel Library
  6. Implementing Tasks Part 2
  7. Dealing with Shared Variables while using Task Parallel Library
  8. Asynchronous programming Deep Dive part 1
  9. Asynchronous programming Deep Dive part 2
  10. Different ways of waiting asynchronously
  11. Exception handling in Task Parallel Library

The posts are written from a beginner's perspective right through to advanced concepts, so the series starts with the prerequisites for understanding the TPL, i.e. lambdas and closures. With the advent of Windows 8, the TPL has gained various features and also grown in importance.

Hope you find the tutorials useful. The code samples are available here.

Asynchronous programming Deep Dive part 2


Picking up from where I left off, I had promised to show you the code behind the “Both” button. The initial code is:

WebResponse Response = getResponse("");
int bbc = Int32.Parse(processBBCNews(Response).ToString());

WebResponse Response1 = getResponse("");
int gn = Int32.Parse(processGoogleNews(Response1).ToString());

int time_stop = System.Environment.TickCount;
//Computing the difference between the number of posts
textBox3.Text = (Math.Abs(bbc - gn)).ToString();

Now, the reduction in time can be felt the most in this part, because we will designate two different tasks to work on the two services.

Now, to do this, if we blindly follow the previous method, we will end up with this:

int time_start = System.Environment.TickCount;
Task<WebResponse> T1 = Task<WebResponse>.Factory.StartNew(() => {
    WebResponse Response = getResponse("");
    return Response;
});

Task<WebResponse> T2 = Task<WebResponse>.Factory.StartNew(() => {
    WebResponse Response = getResponse("");
    return Response;
});

Task<int> T1_next = T1.ContinueWith((antecedent) =>
{
    int gn = Int32.Parse(processGoogleNews(antecedent.Result).ToString());
    return gn;
},
TaskScheduler.FromCurrentSynchronizationContext());

Task<int> T2_next = T2.ContinueWith((antecedent) =>
{
    int bbc = Int32.Parse(processBBCNews(antecedent.Result).ToString());
    return bbc;
},
TaskScheduler.FromCurrentSynchronizationContext());

//Computing the difference between the number of posts
textBox3.Text = (Math.Abs(T1_next.Result - T2_next.Result)).ToString();

int time_stop = System.Environment.TickCount;
label3.Text = (time_stop - time_start).ToString();

But this will not run. In fact, it will freeze your application indefinitely. Why?

The reason is that the line reading T1_next.Result executes before T1_next has actually finished. This is straightforward once we recall that <task_name>.Result implicitly calls Task.Wait(), so it freezes the UI thread while waiting for the result.

At the same time, once T1 finishes, T1_next is queued on the synchronization context, waiting to run on the UI thread. But the UI thread can never get to it, because it is blocked waiting for T1_next. Thus we have a deadlock. Always watch out for such deadlocks when you are dealing with synchronization contexts. So we need to do away with the blocking wait.
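The blocking half of that deadlock can be demonstrated on its own, without any UI. This is a small sketch (not from the original post) confirming that reading .Result really does stall the calling thread until the task completes:

```csharp
using System;
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;

static class ResultDemo
{
    public static bool ResultBlocksCaller()
    {
        // A task that takes ~300 ms to produce its value.
        Task<int> t = Task<int>.Factory.StartNew(() => { Thread.Sleep(300); return 7; });

        var sw = Stopwatch.StartNew();
        int value = t.Result;   // implicit Wait(): the calling thread is stuck here
        return sw.ElapsedMilliseconds >= 200 && value == 7;
    }

    static void Main() { Console.WriteLine(ResultBlocksCaller()); } // True
}
```

In a console app this is merely slow; on a UI thread whose synchronization context also hosts the continuation, the same blocking read is what turns into the deadlock described above.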

We need to change our code a bit:


int time_start = System.Environment.TickCount;
Task<ListBox> T1 = Task<ListBox>.Factory.StartNew(() => {
    WebResponse Response = getResponse("");
    //Instead of returning the response we pass it to a processing function that does not touch the UI
    ListBox bbc = processBBCNews_new(Response);
    Thread.Sleep(2000);
    //The processing function returns a ListBox
    return bbc;
});

Task<ListBox> T2 = Task<ListBox>.Factory.StartNew(() => {
    WebResponse Response = getResponse("");
    ListBox gn = processGoogleNews_new(Response);
    return gn;
});

//The trick is to wait for both T1 and T2 without blocking the UI thread
Task[] arr = { T1, T2 };
TaskFactory tf = new TaskFactory();
//ContinueWhenAll acts like a callback and does not fire a wait on the UI thread, hence the UI stays responsive
tf.ContinueWhenAll(arr, (a) => {
    int count = 0, count1 = 0;

    foreach (string item in T1.Result.Items)
    {
        //The ListBox returned is iterated to add its values to the UI ListBox
        count++;
        listBox1.Items.Add(item);
    }
    foreach (string item in T2.Result.Items)
    {
        count1++;
        listBox2.Items.Add(item);
    }
    //Computing the difference between the number of posts
    textBox3.Text = (Math.Abs(count1 - count)).ToString();
    int time_stop = System.Environment.TickCount;
    label3.Text = (time_stop - time_start).ToString();

}, new CancellationToken(), TaskContinuationOptions.None, TaskScheduler.FromCurrentSynchronizationContext());

If you go through the code, you will see that I have used the ContinueWhenAll method; I have explained the reason in the comments. If you want to know more, visit .
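The ContinueWhenAll pattern can be reduced to a minimal console sketch (not the app code; the two tasks below are stand-ins for the real feed-processing work, and the hard-coded counts are made up for illustration):

```csharp
using System;
using System.Threading.Tasks;

static class ContinueWhenAllDemo
{
    public static int HeadlineDifference()
    {
        // Stand-ins for the two feed tasks; real counts would come from RSS parsing.
        Task<int> bbc = Task<int>.Factory.StartNew(() => 12);
        Task<int> gn  = Task<int>.Factory.StartNew(() => 20);

        TaskFactory tf = new TaskFactory();
        // The continuation fires only after BOTH antecedents complete;
        // nothing blocks while the tasks are still running.
        Task<int> diff = tf.ContinueWhenAll(new Task[] { bbc, gn },
            _ => Math.Abs(bbc.Result - gn.Result));

        return diff.Result;   // blocking here is fine in a console demo
    }

    static void Main() { Console.WriteLine(HeadlineDifference()); } // 8
}
```

Inside the continuation, reading bbc.Result and gn.Result is safe: both tasks are guaranteed to have finished, so no wait is triggered.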

As I had promised, I have uploaded the entire app code here.



Asynchronous programming Deep Dive part 1

In my previous post, I introduced the concepts of Wait, WaitAll, and WaitAny. Let's see a few of these in action. Moreover, let's learn how to efficiently convert our own synchronous code to take advantage of asynchronous and parallel methodologies.

The app

We have a simple app which fetches RSS news feeds from Google and BBC. I do not want to go into the implementation details or the parsing of the XML. If you want to see that, you can just download the code.


So let's click “BBC News” and then “Google”. Now, in the figure above you can see two numbers have appeared below the textboxes. The numbers indicate the number of milliseconds it took for the respective servers to reply with a response. As you can see, the Google servers have a fairly high latency of more than a second. The numbers in the textboxes indicate the number of headlines.

Now, the “Both” button synchronously fires both requests one after the other. The typical time it takes is to the tune of almost 3000 milliseconds.

Now let's see the code. The code for the BBC button is as follows:

int time_start = System.Environment.TickCount;
String url = "";
WebResponse Response = getResponse(url);
textBox1.Text = processBBCNews(Response).ToString();
int time_stop = System.Environment.TickCount;
label1.Text = (time_stop - time_start).ToString();

Now, the code for the Google News button is exactly the same; only the URL and the labels being updated are different.

getResponse is a user-defined function that sends the HTTP GET request and receives the response. processBBCNews extracts each title from the XML and puts it into the ListBox.

private int processBBCNews(WebResponse Response)
{
    listBox1.Items.Clear();
    XmlReader rdr = XmlReader.Create(new System.IO.StreamReader(Response.GetResponseStream()));
    int count = 0;
    while (rdr.Read())
    {
        if (rdr.Name.Equals("title"))
        {
            string a = rdr.ReadElementContentAsString();
            listBox1.Items.Add(a);
            count++;
        }
    }
    return count;
}
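The same XmlReader pattern can be exercised on its own, without a network call or a UI. This sketch (not the app code) reads from an in-memory string instead of a live WebResponse and just counts the <title> elements; the NodeType check is an addition of mine to guard against non-element nodes, and the sample feed is made up:

```csharp
using System;
using System.IO;
using System.Xml;

static class FeedParseDemo
{
    // Counts <title> elements in an RSS-style XML string,
    // mirroring the loop used in processBBCNews.
    public static int CountTitles(string xml)
    {
        int count = 0;
        using (XmlReader rdr = XmlReader.Create(new StringReader(xml)))
        {
            while (rdr.Read())
            {
                if (rdr.NodeType == XmlNodeType.Element && rdr.Name.Equals("title"))
                {
                    string headline = rdr.ReadElementContentAsString(); // the headline text
                    count++;
                }
            }
        }
        return count;
    }

    static void Main()
    {
        string sample =
            "<rss><channel><title>BBC</title>" +
            "<item><title>Headline 1</title></item>" +
            "<item><title>Headline 2</title></item></channel></rss>";
        Console.WriteLine(CountTitles(sample)); // 3 (the channel title plus two items)
    }
}
```

Note that in a real RSS feed the channel itself has a title, so the count is one more than the number of headlines.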

In the code behind the “Both” button, we do exactly the same thing. We first send the HTTP GET request for Google News, then receive the response and process it. Then we fire off another HTTP GET request for BBC News and get the response. The total time taken is noted.
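Why this sequential approach is slow can be shown with two mock requests. In this sketch (not the app code), Task.Delay(200) stands in for one ~200 ms server round trip; run back to back the waits add up, overlapped they don't:

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;

static class SequentialVsParallel
{
    // Stand-in for one network request: ~200 ms of pure waiting.
    static Task FakeRequest() { return Task.Delay(200); }

    public static long Sequential()
    {
        var sw = Stopwatch.StartNew();
        FakeRequest().Wait();
        FakeRequest().Wait();          // starts only after the first completes
        return sw.ElapsedMilliseconds; // roughly 400 ms
    }

    public static long Concurrent()
    {
        var sw = Stopwatch.StartNew();
        Task t1 = FakeRequest();
        Task t2 = FakeRequest();       // both waits overlap
        Task.WaitAll(t1, t2);
        return sw.ElapsedMilliseconds; // roughly 200 ms
    }

    static void Main()
    {
        Console.WriteLine("sequential: " + Sequential() + " ms");
        Console.WriteLine("concurrent: " + Concurrent() + " ms");
    }
}
```

This mirrors the ~3000 ms the “Both” button takes synchronously: the total is the sum of both latencies, while the overlapped version costs only the slower of the two.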

Now, to convert the synchronous code to asynchronous code:

private void button1_Click(object sender, EventArgs e)
{
    int time_start = System.Environment.TickCount;
    String url = "";

    //We have specified the return type of the code that will be executed in the task.
    //This allows us to use the Task as a function and implement encapsulation;
    //hence variables that are used inside do not need to be declared outside.
    Task<WebResponse> T = Task<WebResponse>.Factory.StartNew(() =>
    {
        WebResponse Response = getResponse(url);
        return Response;
    });

    //As we know, processing of the result can only occur when the response is
    //received; hence, instead of using callbacks, we are using ContinueWith.
    //In this series I will teach you to think in terms of continuations, not callbacks.
    //Here "antecedent" refers to the task that has just finished.
    Task T2 = T.ContinueWith((antecedent) =>
    {
        textBox1.Text = processBBCNews(antecedent.Result).ToString();
        int time_stop = System.Environment.TickCount;
        label1.Text = (time_stop - time_start).ToString();
    },
    //As we already know, the view (the UI) can only be updated from the UI thread,
    //hence we are scheduling the continuation on the same thread as the UI thread.
    TaskScheduler.FromCurrentSynchronizationContext());
}


All the explanations are given in the comments. I will share my code in the next blog post, where we look at the code behind the “Both” button.

Shared Variables while using Task Parallel Library

A word of caution while using shared variables.

If you look at my previous example, you will see that my calculate-PI task returns a variable j which is declared local to the function, not to the task. So that variable is shared between the UI thread and the thread running Task T.

But there are certain things we need to keep in mind while using shared variables, since they might lead to race conditions. Let me illustrate that with a diagram.


Look carefully at the diagram below. We have two tasks, T and T_new. T operates on a value called “variable”, which is shared between the UI thread, the T_new thread, and itself. What is shown is that T runs first and populates the value of “variable”; then, AFTER it finishes, task T_new reads the value and computes “Result”. The problem is that there is absolutely no guarantee that T will finish before T_new: it might happen that T_new reads “variable” before T has updated its value!

This is a race condition, since both tasks race to access the variable; we cannot predict beforehand which task will access “variable” first.

Sometimes a race condition is difficult to catch, as it may not appear in Debug mode. In Release mode, .NET optimizes a lot of routines, so the race may only show up then. It is mostly platform specific and hence very hard to catch.

Now, coming back to our calculate-PI program, it is free from any race condition. The variable j is first accessed in the Task T thread, and then accessed again in the continuation. But since we used ContinueWith, the continuation can only run once task T has finished updating j; hence there is no race condition.
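The ordering guarantee can be checked with a small sketch (not the PI program itself; the Sleep and the value 42 are stand-ins). The continuation reads the shared variable only after the antecedent task has written it, so the read is deterministic:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

static class SharedVariableDemo
{
    static int j;   // shared between the task below and its continuation

    public static int SafeRead()
    {
        Task t = Task.Factory.StartNew(() =>
        {
            Thread.Sleep(100);   // stands in for the PI computation
            j = 42;              // the task writes the shared variable
        });

        // ContinueWith guarantees this read runs only after t has finished,
        // so it always observes j == 42 -- no race.
        Task<int> reader = t.ContinueWith(_ => j);
        return reader.Result;
    }

    static void Main() { Console.WriteLine(SafeRead()); } // 42
}
```

If the reader were started with StartNew instead of ContinueWith, it could run before the write and observe 0: that is exactly the race in the diagram.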

So what do we do when we need to wait for one task to finish before starting another?

We use the Wait method. More on that in the next section.
