- 403 userRateLimitExceeded
- 403 rateLimitExceeded
- 429 RESOURCE_EXHAUSTED

These errors are essentially flood protection: your application is sending requests too fast. When you see these errors, Google recommends that you implement exponential backoff.

Exponential backoff is a strategy where your application periodically retries a failed request over an increasing amount of time. It is a standard error-handling strategy for network applications. The Google APIs are designed with the expectation that clients which choose to retry failed requests do so using exponential backoff. Besides being “required”, using exponential backoff increases the efficiency of bandwidth usage, reduces the number of requests required to get a successful response, and maximizes the throughput of requests in concurrent environments.

The flow for implementing simple exponential backoff is as follows.

- Make a request to the API
- Receive an error response that has a retry-able error code
- Wait 1s + random_number_milliseconds seconds
- Retry request
- Receive an error response that has a retry-able error code
- Wait 2s + random_number_milliseconds seconds
- Retry request
- Receive an error response that has a retry-able error code
- Wait 4s + random_number_milliseconds seconds
- Retry request
- Receive an error response that has a retry-able error code
- Wait 8s + random_number_milliseconds seconds
- Retry request
- Receive an error response that has a retry-able error code
- Wait 16s + random_number_milliseconds seconds
- Retry request
- If you still get an error, stop and log the error.

In the above flow, random_number_milliseconds is a random number of milliseconds less than or equal to 1000. This is necessary to avoid certain lock errors in some concurrent implementations, and random_number_milliseconds must be redrawn after each wait. If you are using any of the Google client libraries you shouldn’t have to worry about this; most of them have already implemented it for you.
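The flow above can be sketched in C# along these lines. This is a minimal sketch, not the client library's implementation: `doRequest` and `isRetryable` are placeholder delegates standing in for your own API call and error inspection.

```csharp
using System;
using System.Threading;

public static class Backoff
{
    private static readonly Random Rng = new Random();

    // Retries the request, doubling the wait (1s, 2s, 4s, ...) and adding
    // up to 1000 ms of random jitter before each retry, up to maxRetries.
    public static T ExecuteWithBackoff<T>(Func<T> doRequest,
                                          Func<Exception, bool> isRetryable,
                                          int maxRetries = 5)
    {
        for (int attempt = 0; ; attempt++)
        {
            try
            {
                return doRequest();
            }
            catch (Exception ex) when (isRetryable(ex) && attempt < maxRetries)
            {
                // 2^attempt seconds plus random_number_milliseconds (<= 1000).
                int waitMs = (1 << attempt) * 1000 + Rng.Next(0, 1001);
                Thread.Sleep(waitMs);
            }
        }
    }
}
```

If the request still fails after the last retry, the exception propagates so you can stop and log the error, just as the flow above describes.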

Exponential backoff works just fine. My application detects an error, waits a few milliseconds, and then tries again. The great thing is that 90% of the time I only need to wait and retry once before I get data back. The problem is that it still bothers me that I am getting the initial error message to begin with.

So I have come up with a way of effectively keeping track of how many requests my application is making, and then slowing it down just enough that Google never begins to detect flooding.

This class is designed to let you keep track of the number of requests you are making to Google. The class logs the time each request is made and tracks how many requests have been made within a given window. If too many have been made, it sleeps just long enough to avoid flooding the server. This does not work 100% of the time, but it will significantly reduce the number of flooding errors you get.
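A minimal sketch of such a class, assuming a fixed requests-per-second budget. The class name and the one-second rolling window are my own choices for illustration, not the code from my project:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

// Tracks the time of each request and sleeps just long enough to stay
// under a fixed number of requests per rolling one-second window.
public class RequestThrottler
{
    private readonly int _maxRequestsPerSecond;
    private readonly Queue<DateTime> _requestTimes = new Queue<DateTime>();

    public RequestThrottler(int maxRequestsPerSecond)
    {
        _maxRequestsPerSecond = maxRequestsPerSecond;
    }

    // Call this before every API request.
    public void WaitIfNeeded()
    {
        DateTime now = DateTime.UtcNow;

        // Drop timestamps that have fallen out of the one-second window.
        while (_requestTimes.Count > 0 && (now - _requestTimes.Peek()).TotalSeconds > 1)
            _requestTimes.Dequeue();

        if (_requestTimes.Count >= _maxRequestsPerSecond)
        {
            // Sleep until the oldest request leaves the window.
            TimeSpan wait = TimeSpan.FromSeconds(1) - (now - _requestTimes.Peek());
            if (wait > TimeSpan.Zero)
                Thread.Sleep(wait);
        }

        _requestTimes.Enqueue(DateTime.UtcNow);
    }
}
```

Set the budget slightly below your project's actual quota so that occasional clock skew doesn't push you over the line.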

Google recommends that we implement exponential backoff when we encounter any of the rate-limit errors. I personally like to take things one step further and limit the amount of flooding my applications do in the first place. Google allows us to access their APIs largely free of charge, so I like to be polite when accessing their systems.

In this tutorial series I am going to break a bit from my normal C# Google tutorials. Over the last several months I have seen a lot of people talk about algorithm performance and Big O. This is something I learned a very long time ago in college. After more than 20 years, a lot of it has become instinct rather than something I do with direct intent. When designing an algorithm it is best to know exactly what you are doing and why. As someone who has been an application developer for more than twenty years, some things become more instinctive and less purposeful. I have always enjoyed writing tutorials, so I thought I would write a series on this in order to get it straight in my own head. I intend to put my normal spin on the series: it will be as simple as possible, and the examples will just work.

- A gentle introduction to algorithms and bigO
- Algorithms Constant Time O(1)
- Algorithms Linear time O(n)
- Algorithms Logarithmic Time O(log n)
- Algorithms Quadratic time O(n²)

Logarithmic time implies that an algorithm's run time is proportional to the logarithm of the input size. In Big O notation this is usually written O(log n).

For those of us who have not been in school in a while:

In mathematics, the logarithm is the inverse operation to exponentiation. That means the logarithm of a number is the exponent to which another fixed number, the base, must be raised to produce that number. In simple cases the logarithm counts factors in multiplication.

If that explanation wasn’t much help to you either, I recommend watching a video. It has been a very long time since I was in school, so after searching high and low I ended up watching a video about it on Khan Academy, and I recommend you do the same.

I actually learned binary search after bubble sort, and I found it simply clever. Let’s go back to our book example from the first article. You have a book in front of you, and I want you to find page 62. How would you proceed? You could take the linear option and just start at page one, working your way forward until you hit page 62. Worst case, if there were 62 pages in your book, you would have to flip through 62 pages to find the correct one. What if I told you there was a better, faster way?

That better way is a binary search. With a binary search we find the middle point of the data and check whether the number we are looking for is higher or lower. So if we have 100 pages and we are looking for page 20, the middle would be 50.

1. We check whether 20 is higher or lower than 50.
2. It’s lower, so we split 50 in half and get 25.
3. We check whether 20 is higher or lower than 25.
4. It’s lower, so we split 25 in half and get 13.
5. We check whether 20 is higher or lower than 13.
6. It’s higher, so we split the range between 13 and 25 in half and get 19.
7. We check whether 20 is higher or lower than 19.
8. It’s higher, so we split the range between 19 and 25 in half and get 22.
9. We check whether 20 is higher or lower than 22.
10. It’s lower, so we split the range between 19 and 22 in half and get 20.
11. We check and find that 20 is the correct number.

So if we had used the linear solution we would have had to check 20 pages to find the correct page, whereas with binary search we only made 6 comparisons (steps 1, 3, 5, 7, 9, and 11 above). With each comparison we throw away half of the remaining data. This makes logarithmic-time algorithms very efficient for large sorted lists of data.

Binary search is a logarithmic algorithm. Doubling the data only means that you need to perform one extra split. This makes it very effective for searching large sorted lists.
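The page-hunting steps above are a binary search; a compact C# version over a sorted array might look like this (a sketch, with names of my own choosing):

```csharp
public static class Search
{
    // Classic binary search over a sorted array.
    // Returns the index of 'find', or null if it is not present.
    public static int? BinarySearch(int[] sorted, int find)
    {
        int low = 0, high = sorted.Length - 1;
        while (low <= high)
        {
            int mid = low + (high - low) / 2;   // written this way to avoid overflow
            if (sorted[mid] == find)
                return mid;
            if (find < sorted[mid])
                high = mid - 1;                 // throw away the upper half
            else
                low = mid + 1;                  // throw away the lower half
        }
        return null;
    }
}
```

For a 100-page book this never needs more than 7 comparisons, and doubling to 200 pages adds only one more.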

In this tutorial series I am going to break a bit from my normal C# Google tutorials. Over the last several months I have seen a lot of people talk about algorithm performance and Big O. This is something I learned a very long time ago in college. After more than 20 years, a lot of it has become instinct rather than something I do with direct intent. When designing an algorithm it is best to know exactly what you are doing and why. As someone who has been an application developer for more than twenty years, some things become more instinctive and less purposeful. I have always enjoyed writing tutorials, so I thought I would write a series on this in order to get it straight in my own head. I intend to put my normal spin on the series: it will be as simple as possible, and the examples will just work.

- A gentle introduction to algorithms and bigO
- Algorithms Constant Time O(1)
- Algorithms Linear time O(n)
- Algorithms Logarithmic Time O(log n)
- Algorithms Quadratic time O(n²)

Constant time implies that the number of operations the algorithm needs to perform to complete a given task is independent of the input size. In Big O notation we write this as O(1).

In layman’s terms, that means that no matter how much data you add, the amount of time to perform the task will not change. Back to our book example: you could have 100 pages or 10,000,000 pages, and the time to find the page would not change.

One of the ways I like to store sets of data is a dictionary, or hash table. I like to think of the key in the hash table as a primary key in a relational database table; as with primary keys, the key in the dictionary must be unique. Let’s say you have a list of users, and each user has an email address. Email addresses must be unique in your system: no two users will have the same email address.

I can create a dictionary to store the data.

```
Dictionary<string, person> data = new Dictionary<string, person>();
data.Add("test@test.com", new person { Name = "Linda Lawton", Email = "test@test.com" });
```

I load the data into my dictionary, making the key equal to the user’s email address. If my application ever needs to find a specific user, I simply look for the item in the dictionary whose key equals that email address. I won’t need to search through a list of all the users; I can go directly to the item I am looking for.

`var linda = data["test@test.com"];`

It won’t matter whether I have 10 people or 100,000 people in my dictionary; it will take the same amount of time to find this user.

Here is another good example of constant time. Let’s say you work at a diner. When the dishes are washed, each clean plate is placed on top of the stack of plates. When a new customer arrives, you take the first plate off the top of the stack. Stacks are last in, first out: it doesn’t matter how many items are in the stack, because when you want to remove an item it will always be the one you placed on top.
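The plate stack maps directly onto the built-in Stack&lt;T&gt;. A quick sketch (the plate names are just for illustration):

```csharp
using System;
using System.Collections.Generic;

var plates = new Stack<string>();
plates.Push("plate 1");   // washed first, now at the bottom
plates.Push("plate 2");
plates.Push("plate 3");   // washed last, now on top

// Pop always removes the top plate in O(1), no matter how tall the stack is.
Console.WriteLine(plates.Pop());   // prints "plate 3"
```

Push and Pop are both O(1) operations, which is exactly why the stack of plates never slows down as it grows.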

Conclusion

Before you even begin to code a new algorithm you must first consider the data itself. Determine what your best and worst cases will be, look at the amount of data you have, and consider the specs of the machine that will be running the algorithm.

If you can design your code to run in constant time, it won’t matter how much data you have; it will always run at the same speed.

Join me in the next tutorial Algorithms Linear time O(n).

In this tutorial series I am going to break a bit from my normal C# Google tutorials. Over the last several months I have seen a lot of people talk about algorithm performance and Big O. This is something I learned a very long time ago in college. After more than 20 years, a lot of it has become instinct rather than something I do with direct intent. When designing an algorithm it is best to know exactly what you are doing and why. As someone who has been an application developer for more than twenty years, some things become more instinctive and less purposeful. I have always enjoyed writing tutorials, so I thought I would write a series on this in order to get it straight in my own head. I intend to put my normal spin on the series: it will be as simple as possible, and the examples will just work.

- A gentle introduction to algorithms and bigO
- Algorithms Constant Time O(1)
- Algorithms Linear time O(n)
- Algorithms Logarithmic Time O(log n)
- Algorithms Quadratic time O(n²)

Linear time means that the run time grows in direct proportion to the size of the input: the work is done sequentially, as a series of steps leading from a beginning to an end.

I like to think of linear as step by step: start at the beginning and slowly work your way to the end. In programming, a for loop is a great example of linear time.

```
for (int i = 1; i <= 100; i++)
{
    Console.WriteLine(i);
}
```

Each time through the loop, i is incremented by one. Having your algorithm run in linear time can be quite efficient.

Let’s say that we have an array of integers.

`var data = new int[] { 1, 2, 3, 4, 5, 6, 7, 8, 9 };`

How would you go about finding the number 6? There are a few ways of doing it.

```
public static int? FindPositionOfNumberWithinArray(int[] data, int find)
{
    for (int i = 0; i < data.Length; i++)
    {
        if (data[i] == find)
            return i;
    }
    return null;
}
```

The method above loops through each item in the array, testing it until it finds the correct one. Let’s think about some edge cases. Best case, the item we are looking for is the first item; worst case, it is the last. The number of items in the array determines how long the method may run: every time we increase the amount of data, we potentially increase the run time.

Linear search runs, at worst, in linear time. That means that if you have n items in your array and the item you are looking for is the last one, you are going to have to make n comparisons.

Can you think of any way to improve the above search? I can think of one: if we knew the size of our array and the data was sorted, we could test whether the number lies toward the end of the array and, if so, run our loop backwards.

```
public static int? FindPositionOfNumberWithinArray(int[] data, int find)
{
    for (int i = data.Length - 1; i >= 0; i--)
    {
        if (data[i] == find)
            return i;
    }
    return null;
}
```

Our new worst case would be an item exactly in the middle, which is better than before, but this is still a linear search. However, suppose our data looked like this:

`var data = new int[] { 1, 10, 11, 12, 13, 14 };`

If we want to find the number 11, we might think that starting at the end would be better, but it won’t be. This is why you really need to know your data.

Before you even begin to code a new algorithm you must first consider the data itself. Determine what your best and worst cases will be, look at the amount of data you have, and consider the specs of the machine that will be running the algorithm.

An algorithm is said to take linear time, or O(n) time, when its worst-case complexity is O(n). This means that the more data you have, the more time it takes to process, and the increase is linear. Each item has to be processed one at a time. Linear time is the best possible time complexity in situations where the algorithm has to read its entire input sequentially. Another example would be adding up each value in an array.
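That last example is easy to sketch: summing touches every element exactly once, so the work grows in step with n (the class and method names here are my own):

```csharp
public static class Totals
{
    // Adding up an array is O(n): every element is visited exactly once.
    public static int Sum(int[] data)
    {
        int total = 0;
        foreach (int value in data)
            total += value;
        return total;
    }
}
```

With our earlier array, `Totals.Sum(new int[] { 1, 2, 3, 4, 5, 6, 7, 8, 9 })` visits all nine elements and returns 45; double the array and you double the work.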

Join me in the next tutorial Algorithms Quadratic time O(n²).

- A gentle introduction to algorithms and bigO
- Algorithms Constant Time O(1)
- Algorithms Linear time O(n)
- Algorithms Logarithmic Time O(log n)
- Algorithms Quadratic time O(n²)

algorithm /ˈalɡərɪð(ə)m/ noun

noun: algorithm; plural noun: algorithms

a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer.

An algorithm is just a set of instructions needed to solve a problem.

Imagine I place a book on the desk in front of you. Now I would like you to find page 42. How would you begin? One way to start would be to open the book and flip pages one at a time; this is called a linear search. To find the most efficient algorithm for a problem, though, we need to consider a few things.

When we work with algorithms in computer programming we need to be mindful of a few things.

- How much data is there?
- Is the data already sorted?
- How much work do we need to do to solve this problem?
- How much power does the machine we will be performing this on have?

Let’s go back to our book example. Say there are 600 pages in your book and I want you to find page number 600. If we stick with our linear search, we are going to have to flip through 599 pages before we find the page we are looking for. This is called the worst case. If we are looking for page one, we will find it on the first page we check; this is called the best case. So worst case we will have to look through all 600 pages to find the right page, and best case it will be the first page.

Back to our book again; it still has 600 pages. What would happen if all the pages had fallen out of the book and they were no longer in order? Now find page ten. Can you imagine how much time that would take? Assuming we are smart enough to set aside the pages we have already checked, worst case we are going to have to check 599 pages before we find page 10.

That is an interesting concept with algorithms: best case and worst case. Best case above would be picking up page 10 on the first try; worst case would be finding it last. I always try to think about what the best and worst cases would be for my algorithms.

If we knew for a fact that there would only ever be 50 pages in our book, it wouldn’t matter if you had to go through all 50 pages to find the one you were looking for. However, if our book could potentially have 10,000,000 pages and we needed to find the last one, the power of the machine you are running on begins to matter a great deal. A faster machine can run through the pages a lot faster than a slower one.

Before you even begin to code a new algorithm you must first consider the data itself. Determine what your best and worst cases will be, look at the amount of data you have, and consider the specs of the machine that will be running the algorithm.

Join me in the next tutorial Algorithms Constant Time O(1).


- A gentle introduction to algorithms and bigO
- Algorithms Constant Time O(1)
- Algorithms Linear time O(n)
- Algorithms Logarithmic Time O(log n)
- Algorithms Quadratic time O(n²)

With quadratic time, an algorithm's run time is proportional to the square of the input size. The Big O notation for quadratic time is O(n²).

Programmatically speaking, quadratic-time algorithms are normally marked by nested for loops. If your array has 10 items and your algorithm runs in quadratic time, you are going to be doing 10 × 10 = 100 steps. When we were looking at linear time, it would only have been 10 steps. So by simply adding one additional item (11 × 11 = 121) you add 21 additional checks to your algorithm.

Bubble sort is normally the first sorting algorithm we learn. It is quite simple to understand and implement; however, its efficiency is not all that good, as we will see.
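The benchmark program from the original post isn't reproduced here; as a stand-in, a minimal bubble sort with a comparison counter (the counter and names are my own additions) might look like this:

```csharp
public static class Sorting
{
    // Bubble sort: the nested loops give O(n^2) behaviour.
    // Sorts 'data' in place and returns the number of comparisons performed.
    public static long BubbleSort(int[] data)
    {
        long checks = 0;
        for (int i = 0; i < data.Length - 1; i++)
        {
            // Each outer pass bubbles the largest remaining value to the end,
            // so the inner loop can stop one position earlier each time.
            for (int j = 0; j < data.Length - 1 - i; j++)
            {
                checks++;
                if (data[j] > data[j + 1])
                {
                    int temp = data[j];
                    data[j] = data[j + 1];
                    data[j + 1] = temp;
                }
            }
        }
        return checks;
    }
}
```

For 100 items this performs 100 × 99 / 2 = 4,950 comparisons, which lines up with the benchmark output quoted further on.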

When I run the above code and give it an unsorted array of 100 items, the results are as follows.

```
Enter length of array. (int)
100
Array contains '99' items.
I preformed '4950' checks
Execution time '0' minutes '0' seconds '0' Milliseconds
```

That’s not too bad, right? It runs quite quickly.

Now let’s see what happens when I run the above code and give it an unsorted array of 60000 items. The results are as follows.

```
Enter length of array. (int)
60000
Array contains '59999' items.
I preformed '1799970000' checks
Execution time '0' minutes '14' seconds '344' Milliseconds
```

Depending upon the speed of your machine, it may run faster or slower for you. As you can see, the more data I add, the slower the algorithm becomes. This is why it is very important to know your data when deciding which algorithm to use. If you know you will never have more than 100 items, then using a bubble sort would be fine. However, if in a few years your data could grow to a million items, using a bubble sort would be a very bad idea.

You also need to consider the speed of the machine. Running this algorithm on my desktop was not a problem, but what if you were running it on something smaller? Would you see a difference?

You should now understand why it is important to know your data when deciding which type of algorithm to use. The amount of data and the power of the machine running the algorithm both matter a great deal. Using a quadratic-time O(n²) algorithm to sort 100 items may not seem bad, but what if you were running it on an Arduino board?

Join me in the next tutorial Algorithms Logarithmic Time O(log n).

Are you trying to delete files from the Shared with me folder on Google Drive? Or do you just want to list the files in the Shared with me folder? I am going to show you how to do both using the C# client library and the Google Drive v3 API.

The user doesn’t really own the files that appear in Shared with me. These are files that have been shared with the user; they may have read or write access to the file in question, but they cannot delete it, because they do not own it. That is why the following command will not work:

`service.Files.Delete(fileId).Execute();`

To remove the file, you simply remove the permission on the file that grants the user access. The user is allowed to remove their own permission.
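A rough sketch of that with the v3 client library follows. Matching the permission by its EmailAddress field is my own assumption about how to find the current user's permission; inspect the permissions returned for your own files before deleting anything.

```csharp
using System;
using Google.Apis.Drive.v3;

public static class SharedFileCleanup
{
    // Removes the current user's access to a file that was shared with them.
    // 'service' is an authenticated DriveService; 'userEmail' is the signed-in user's address.
    public static void RemoveSharedFile(DriveService service, string fileId, string userEmail)
    {
        var listRequest = service.Permissions.List(fileId);
        listRequest.Fields = "permissions(id,emailAddress)";
        var permissions = listRequest.Execute();

        foreach (var permission in permissions.Permissions)
        {
            if (string.Equals(permission.EmailAddress, userEmail, StringComparison.OrdinalIgnoreCase))
            {
                // Deleting our own permission removes the file from "Shared with me".
                service.Permissions.Delete(fileId, permission.Id).Execute();
            }
        }
    }
}
```

This only touches our own permission; the owner and everyone else the file was shared with keep their access.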

Note: This does not appear to work in all cases. I have a file on my Google Drive that was shared with me by what appears to be a service account; I have no permissions on the file, therefore I can’t remove my access. I am still digging.

It is possible to remove files from the Shared with me directory on Google Drive: we simply remove our permission on the file. I would like to thank KrishnaKanhaiya for asking this question on the site here and then posting it directly up on Stack Overflow. It was a very good question and not something I had thought of trying before.

Occasionally while working on a project I need to test some calls to Google's APIs manually. To do that you need an access token, and getting an access token can be a pain sometimes.

So I have created a simple curl script that shows you how to authenticate to Google and get an access token.

You will need to go to the Google Developer Console and create a client ID; for this it is easiest to use the client type Other. Using type Other avoids the need for a redirect URI; we don't really need one, as we are just going to run this as a curl script. If you are really worried about security, you can lock the client to your IP address.

What you need are the following:

- Client ID
- Client Secret
- Scopes: the scopes define what access you will receive. You can have more than one; just put a space between them. For this I am just going to use openid.

Now replace the values needed in the following link and put it in a web browser

`https://accounts.google.com/o/oauth2/auth?client_id=[Application Client Id]&redirect_uri=urn:ietf:wg:oauth:2.0:oob&scope=[Scopes]&response_type=code`

You should get the standard request for authentication. Once you have accepted, copy the authentication code. Then take the following command and replace the values as needed.

```
curl \
  --request POST \
  --data "code=[Authentication code from authorization link]&client_id=[Application Client Id]&client_secret=[Application Client Secret]&redirect_uri=urn:ietf:wg:oauth:2.0:oob&grant_type=authorization_code" \
  https://accounts.google.com/o/oauth2/token
```

You should get something like this:

```
{
  "access_token": "XXXXX",
  "expires_in": 3600,
  "id_token": "XXXXX",
  "refresh_token": "XXXXX",
  "token_type": "Bearer"
}
```

Congratulations, you now have an access token you can use in your Google API calls. Just remember to use access_token= and not key=; there is a difference.

If your access token expires you can use the following command to refresh it using the Refresh token.

```
curl \
  --request POST \
  --data "client_id=[Application Client Id]&client_secret=[Application Client Secret]&refresh_token=[Refresh token granted by second step]&grant_type=refresh_token" \
  https://accounts.google.com/o/oauth2/token
```

The response will be slightly different this time; you won’t get a new refresh token.

```
{
  "access_token": "XXXXX",
  "expires_in": 3600,
  "id_token": "xxxxx",
  "token_type": "Bearer"
}
```

We can use a couple of simple curl commands to get an access token for use with Google APIs. There is a public Gist up on GitHub for this: **GoogleAuthenticationCurl.sh**

As you all know, I am a Windows developer; most of what I build is either background services or Windows desktop applications. I have also been working with Google Analytics for a number of years now. One of the issues I have had is that there was no SDK for Windows: there is an Android SDK and an iOS SDK, but there was nothing for Windows developers. I have spoken with the Google Analytics team about this for a number of years, and their position was that it wasn't a priority, which is understandable considering we are talking Microsoft vs. Google here. Over the years I have created a few simple trackers myself to address the issue, but they were never something I was willing to open-source, nor did I feel up to the task of creating an SDK from scratch alone.

A few months ago the Google Analytics team informed me that Microsoft had contacted them and was working on an SDK. I jumped at the chance to assist. The project is now live on GitHub.

dotnet/windows-sdk-for-google-analytics

There is also a Getting Started page as part of the documentation wiki on the project. I will be working on extending the documentation over the next several weeks to ensure that the SDK is fully documented. You can of course expect a number of tutorials on this from me in the future.

We FINALLY have an official Google Analytics Windows SDK!

I recently ran across a question on Stack Overflow. The question was quite simple: how to retrieve the folders from Google Drive and display them in a directory list using C# and the Google .NET client library.

I have used a PageStreamer in the ListAll method so that, in the event there are more than 1000 files in the directory, we still get them all back. Page streaming is much easier than having to deal with the nextPageToken yourself.

After we have all of the results, PrettyPrint runs recursively to request all of the files within each directory.

By using PageStreamer you can retrieve all of the rows for your request rather than having to worry about the nextPageToken yourself.
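As a sketch, the PageStreamer portion looks roughly like this; the folder query and page size are my own assumptions for illustration, so adapt them to your request:

```csharp
using System;
using Google.Apis.Drive.v3;
using Google.Apis.Requests;

public static class DriveFolders
{
    // Streams every result page of a Files.List request, handling
    // nextPageToken internally. 'service' is an authenticated DriveService.
    public static void ListAllFolders(DriveService service)
    {
        var pageStreamer = new PageStreamer<Google.Apis.Drive.v3.Data.File,
                                            FilesResource.ListRequest,
                                            Google.Apis.Drive.v3.Data.FileList,
                                            string>(
            (request, token) => request.PageToken = token,   // feed each page token back in
            response => response.NextPageToken,              // pull the next token out
            response => response.Files);                     // yield the items on each page

        var listRequest = service.Files.List();
        listRequest.Q = "mimeType = 'application/vnd.google-apps.folder'";
        listRequest.PageSize = 1000;

        foreach (var folder in pageStreamer.Fetch(listRequest))
            Console.WriteLine(folder.Name);
    }
}
```

The three lambdas tell the streamer how to plug a token into the request, how to read the next token from a response, and where each page's items live; everything else is handled for you.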

Note: I am not responsible for the usage of your quota if you print everything.
