Azure storage evolved from the very first iterations of Azure, when the platform was purely platform-as-a-service: you could come in and run workloads such as websites and databases. Today you can run much more, including infrastructure virtual machines, but when it began it was mostly for platform applications. Azure storage grows as you need it, and it grows instantly. You can use it for a tiny amount of data that is accessed very infrequently and slowly, or for extremely large, demanding, high-performance workloads that run on SSD storage and need hundreds of thousands of IOPS. You don’t have to think about the underlying implementation of storage; all you need to do is define the storage and Azure takes care of the rest.
You don’t have to worry about things like load balancing or redundancy, about making sure that data is written to multiple locations. In fact, within Azure, before a piece of data is considered written it has to be stored in at least three individual locations. So when you write a file, before Azure returns success, that file has been stored in at least three locations. Azure storage is designed to support pretty much any workload or application you have.

There are two tiers of storage: Standard and Premium.

Standard

Standard storage is what most applications will use; it is cheaper and a bit slower. You pay for what you consume: if you have a 1 TB VHD with only 100 MB of data on it, you pay only for those 100 MB, not for the whole disk size. It is not the disk size that counts, it is the data you have on it (Standard disks are charged per transaction and per GB). The Standard performance tier allows you to store Tables, Queues, Files, Blobs and Azure virtual machine disks.

Premium

Premium disks aren’t charged by transaction; it’s more of a flat-fee model. Premium storage is based on SSD storage, as opposed to the hard-drive storage we have in the Standard tier. You have to do some math, keeping costs in mind on one hand and your need for IOPS (input/output operations per second) on the other. How fast do you need the storage subsystem to be for your IaaS VM workloads? If they’re just doing relatively low-horsepower tasks, maybe serving DNS or some light IIS web traffic, you may be fine with Standard storage. By contrast, if you’re doing a lot of random I/O and you’re hosting, let’s say, IaaS-based SQL Server or MySQL database servers, then you may want to look at Premium for them. Here we get charged for the disk size, not the data written. The Premium performance tier only supports Azure virtual machine disks.

WHEN TO USE STANDARD STORAGE AND WHEN PREMIUM?

Standard storage works well when you’ve got more than one VM doing the same job. For example, you could have two domain controllers on Standard storage in an availability set; you get a great SLA and they will work quite well, and the same goes for multiple web servers. So domain controllers are a good fit, maybe remote desktop brokers that are not very busy, and some web servers. But Standard storage just does not cut it when you’ve got disk-intensive applications. Busy file servers with many, many users are not going to work well. SQL databases outside of dev and test really won’t perform well without Premium disks. SharePoint servers, forget about it; they really need Premium disks. And of course remote desktop session hosts, as we discussed earlier on, with many users accessing a single operating system and lots of reads and writes to a disk, definitely need Premium storage. So application and database workloads really do need that Premium storage.

Storage Account SLA

Another important piece when configuring storage accounts. I included a link below where you can find all the info regarding the SLA, so please read it.

STORAGE ACCOUNT SLA

Let’s see how we can create storage accounts and let’s explore storage account properties.

Log in to the Azure Portal and click on the Storage accounts blade in the left pane

2019-01-05 23_33_12-Window.png

(If you don’t see the Storage accounts blade in the left pane, click on All services and type in storage accounts. Click on the star to add it to your favorites.)

2019-01-05 23_35_00-Window.png

Click on the + Add

2019-01-05 23_37_44-Window.png

Let’s first focus on the Basics Tab

Select your subscription and the resource group. If you don’t have one you can click on Create new. Next, give your storage account a name and select a location. We already discussed Standard vs Premium storage; I will choose Standard for this example.

Now, under Account Kind we have three options to choose from.

Storage V2 (General Purpose V2) –> these storage accounts support all of the latest features for blobs, files, queues, and tables. GPv2 accounts support all APIs and features supported in GPv1 and Blob storage accounts, and they offer the same durability, availability, scalability, and performance as those account types. Pricing for GPv2 accounts has been designed to deliver the lowest per-gigabyte prices and industry-competitive transaction prices. General-purpose v2 accounts are recommended for most storage scenarios, so I will not focus on V1.

BLOB STORAGE –> these accounts support all the same block blob features as GPv2, but are limited to block blobs and append blobs only; they do not support page blobs.

Our next step is to choose Replication. There we have four options to choose from.

Locally-redundant storage (LRS) –> data is stored in a single datacenter in the region in which you created your storage account. LRS is the lowest-cost option and offers the least durability compared to the other options. In the event of a datacenter-level disaster (fire, flooding, etc.) all replicas might be lost or unrecoverable. So with locally-redundant storage you get three copies of the data in your storage account, all within one datacenter in an Azure region.

Zone-redundant storage (ZRS) –> ZRS replicates your data synchronously across multiple availability zones. Consider ZRS for scenarios like transactional applications where downtime is not acceptable; it lets you keep reading and writing data even if a single zone becomes unavailable or unrecoverable. With zone-redundant storage you get three copies, each stored in a different availability zone (and therefore a different datacenter).

Geo-redundant storage (GRS) –> this replicates our data to a secondary region that is hundreds of miles away from the primary region. If your storage account has GRS enabled, your data is durable even in the case of a complete regional outage or a disaster in which the primary region is not recoverable. With GRS, an update is first committed to the primary region and then replicated asynchronously to the secondary region, where it is replicated again. Geo-redundant storage means you’ve got three copies in the primary region and three copies in the secondary region, six in total. We cannot access the secondary region unless a failover occurs.

Read-access geo-redundant storage (RA-GRS) –> this maximizes availability for your storage account. RA-GRS provides read-only access to the data in the secondary location, in addition to geo-replication across two regions. So again you have six copies in total, three in each region, but here you also get the ability to read the data in both regions.

INFO! A key point when talking about Performance (Standard vs Premium): while Performance is set to Standard, the Replication dropdown gives you more replication options than it does with Premium. If you set it to Premium, you’ll notice that you only get locally-redundant storage, and that is an important difference between the two.
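For reference, each of these portal replication choices maps to an SKU name that you will see in ARM templates, the Azure CLI and the SDKs. A quick sketch of that mapping (the standard SKU identifiers, shown here as a Python dictionary purely for convenience):

```python
# Mapping of the portal replication options to the SKU names used by
# ARM templates, the Azure CLI and the SDKs.
REPLICATION_SKUS = {
    "Locally-redundant storage (LRS)":            "Standard_LRS",
    "Zone-redundant storage (ZRS)":               "Standard_ZRS",
    "Geo-redundant storage (GRS)":                "Standard_GRS",
    "Read-access geo-redundant storage (RA-GRS)": "Standard_RAGRS",
    # The Premium performance tier only offers locally-redundant storage:
    "Premium locally-redundant storage (LRS)":    "Premium_LRS",
}
```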

Access tiers: Hot and Cool can be chosen as the default access tier for an entire storage account, whereas the Archive tier has to be set per blob.

HOT –> This is the default option. Hot storage is for data that you know you’ll need to access frequently. Hot data is always at the ready when you need it; if you know you’re going to access your data at least once a month, you should keep it Hot. Accessing data in the Hot tier is the most cost-effective, while the storage costs themselves are somewhat higher.

COOL –> optimized for storing large amounts of data that is infrequently accessed and stored for at least 30 days. Storing data in the Cool tier is more cost-effective, but accessing that data may be somewhat more expensive than accessing data in the Hot tier.

ARCHIVE (we will see later how to enable this) –> Consider the Archive tier if you don’t expect to access your data within about six months. When you set a blob to the Archive tier, expect that when you do access it, it will take some hours to retrieve. Archive data is effectively stored offline, which is why it takes a while to bring it back when you need it. And keep in mind that when you do access your Archive data, you pay more for that access.

Once done, click on Next: Advanced

2019-01-06 00_08_31-window

Here we need to decide whether we are going to use secure transfer or not. The default is Enabled, but for this example I will select Disabled.

Secure transfer required –> this setting requires HTTPS-based connections when accessing your blobs. One thing to keep in mind: if you add a custom domain to your storage account, meaning you want to access your files using a custom domain instead of the one provided by Azure, access over that custom domain will happen over HTTP, because Azure doesn’t have a certificate with your domain name on it.

I will keep the defaults for the rest of the settings and click on Review + create

2019-01-06 00_14_00-Window.png

And after a few seconds our storage account will be created.

2019-01-06 00_21_51-Window.png
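If you prefer scripting over clicking through the portal, the same Basics and Advanced choices can be expressed in code. Here is a minimal sketch using the Azure Python management SDK (azure-identity and azure-mgmt-storage, track 2); the subscription ID, resource group, account name and region are placeholders you would replace with your own:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import StorageAccountCreateParameters, Sku

# Placeholder values - replace with your own subscription, resource group and name.
subscription_id = "<your-subscription-id>"
client = StorageManagementClient(DefaultAzureCredential(), subscription_id)

params = StorageAccountCreateParameters(
    location="westeurope",
    kind="StorageV2",                    # Account kind: General Purpose V2
    sku=Sku(name="Standard_LRS"),        # Performance tier + replication
    access_tier="Hot",                   # Default access tier for blobs
    enable_https_traffic_only=False,     # "Secure transfer required" disabled, as in this example
)

# begin_create returns a poller; result() blocks until the account is provisioned.
poller = client.storage_accounts.begin_create(
    resource_group_name="demo-rg",
    account_name="nedimdemostorage01",   # must be globally unique, lowercase letters and numbers
    parameters=params,
)
account = poller.result()
print(account.name, account.provisioning_state)
```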

The new account will now be visible under the Storage accounts blade. When we click on our storage account, the first thing that comes up is the Overview. At the top you will see general info about the storage account: which resource group it belongs to, location, replication, etc.

Open in Explorer –> this lets you open the storage account in Azure Storage Explorer. When you click on it you will first have to download the tool. What is new in Azure is that Storage Explorer is also available, in Preview, directly in the portal.

Under the general info we have the services section: Blobs, Files, Tables and Queues. We will focus on Blobs and Files.

2019-01-06 09_04_07-Window.png

 

Let’s start with the Blobs.

2019-01-06 09_22_46-Window.png

Azure Blob Storage is designed to store large amounts of unstructured text or binary data; in fact, blob stands for binary large object. We can use blob storage to store files like virtual hard disks, videos that you want to stream from an application, images for a web application, and even log files made up of plain text.

We have three different blob types (a short SDK sketch after this list shows how each one is created):

  • PAGE BLOBS –> this is the blob type used to store virtual hard disks in Azure Storage. Anytime you build a virtual machine, its virtual disks are stored as page blobs, and page blobs are optimized for random read/write operations.
  • BLOCK BLOBS –> when it comes to other file types, like images, movies, pictures, backups, etc., those are stored as block blobs. Block blobs are composed of multiple blocks of data; a storage client, such as the C# storage client, can break a file up into multiple blocks and upload those blocks in parallel, which decreases upload time. This is probably the type we will use the most.
  • APPEND BLOBS –> these are typically used with text or log files where it’s common to append a line to the end of the file. An append blob is optimized for exactly that operation.
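Here is the sketch mentioned above, showing how each blob type can be created with the azure-storage-blob (v12) Python SDK. The connection string, container and file names are placeholders:

```python
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<connection-string>")
container = service.get_container_client("mycontainer")

# Block blob: the common case for images, videos, backups; uploaded in blocks.
with open("picture.png", "rb") as data:
    container.upload_blob(name="picture.png", data=data, blob_type="BlockBlob")

# Append blob: optimized for append-only workloads such as log files.
log_blob = container.get_blob_client("app.log")
log_blob.create_append_blob()
log_blob.append_block(b"application started\n")

# Page blob: optimized for random read/write; this is what VM disks (VHDs) use.
# Page blobs are written in 512-byte pages, so the size must be 512-aligned.
vhd_blob = container.get_blob_client("disk.vhd")
vhd_blob.create_page_blob(size=1024 * 1024)   # pre-allocate 1 MiB of pages
```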

 

WORKING WITH CONTAINERS IN A STORAGE ACCOUNT

A container organizes a set of blobs, similar to a directory in a file system. A storage account can include an unlimited number of containers, and a container can store an unlimited number of blobs.

When you click on the Blobs service you will be able to create new containers.

2019-01-06 10_44_05-Window.png

To create a new container click on the + Container

2019-01-06 10_45_45-window

Give your container a name; next, we need to select the public access level.

  • Private –> nobody can get to the blobs I eventually put inside this container over the public internet.
  • Blob –> if I wanted to make these public, I could set the access type to Blob, and anybody who knows the URI of a blob in this container can access it publicly over the internet.
  • Container –> this makes the entire container readable and listable, so anyone can see all the blobs in it, and all the blobs are publicly available over the internet as well.

I will select Blob in this example. Once done click OK

 

2019-01-06 10_46_50-Window.png

and you can see down at the bottom that the test container was created successfully.

2019-01-06 11_22_35-Window.png
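For completeness, the same container can be created programmatically. A minimal sketch with azure-storage-blob (v12), where the connection string is a placeholder and public_access maps to the access levels described above:

```python
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<connection-string>")

# public_access maps to the portal's access levels:
#   None        -> Private (no anonymous access)
#   "blob"      -> anonymous read access to blobs only
#   "container" -> anonymous read and list access to the whole container
container = service.create_container("test", public_access="blob")
print(container.container_name)
```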

One of the things you’ll notice working in the portal is that the user interface for blobs is really basic: you can do an upload and a download. Click on Upload

2019-01-06 11_29_40-Window.png

Select the file you would like to upload and click on Advanced. I will use the default authentication type (SAS, shared access signature; we will talk more about this a little later). Under blob type we need to decide which type to use, based on the content we are going to upload. I will upload an image, so Block blob is a good option. Remember that block blobs are the blob type you will most commonly use, for images like this PNG file, JPGs, videos and so on. Then we can select the block size. I don’t have any folders, so I will skip that for now. Once done, click on Upload

2019-01-06 11_33_19-Window.png

and here it is.

2019-01-06 11_37_57-Window.png
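The same upload can also be done from code instead of the portal. A minimal sketch with azure-storage-blob (v12); the file and blob names are placeholders, and because the container’s access level is Blob, the resulting URL is publicly readable:

```python
from azure.storage.blob import BlobServiceClient, ContentSettings

service = BlobServiceClient.from_connection_string("<connection-string>")
blob = service.get_blob_client(container="test", blob="myimage.png")

# Upload the image as a block blob, the same choice made in the portal dialog.
with open("myimage.png", "rb") as data:
    blob.upload_blob(
        data,
        blob_type="BlockBlob",
        content_settings=ContentSettings(content_type="image/png"),
        overwrite=True,
    )

# Since the container's public access level is "Blob", anyone with this URI can read it.
print(blob.url)
```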

If you would like to download this, you will need to click on the ellipsis (…) and select Download

2019-01-06 11_41_18-Window.png
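And the programmatic equivalent of the download, again a minimal sketch with placeholder names:

```python
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<connection-string>")
blob = service.get_blob_client(container="test", blob="myimage.png")

# Stream the blob content into a local file.
with open("myimage-downloaded.png", "wb") as f:
    downloader = blob.download_blob()   # returns a StorageStreamDownloader
    downloader.readinto(f)
```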

ARCHIVE TIER

Before we proceed, I would like to show you the third tier and how you can enable it. Remember, it is set per blob. Click on the ellipsis (…) and select the blob properties

2019-01-08 16_07_58-window

Under the Access Tier you will be able to select Archive.

2019-01-08 16_08_25-window
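Changing the tier of a single blob can also be done from the SDK. A minimal sketch with azure-storage-blob (v12) and placeholder names:

```python
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<connection-string>")
blob = service.get_blob_client(container="test", blob="myimage.png")

# Valid values are "Hot", "Cool" and "Archive".
blob.set_standard_blob_tier("Archive")

# Reading an archived blob later requires rehydrating it back to Hot or Cool
# first, which can take hours:
# blob.set_standard_blob_tier("Hot")
```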

Now, if you want to do any more sophisticated type of controls, one of the things you can do is come into the storage account and click on the button that says Open in Explorer. If you don’t have Storage Explorer installed, clicking Open in Explorer will give you the option to download it.

2019-01-06 11_42_35-Window.png

If you don’t want to download it, you can run it directly from the portal. Currently it is in Preview, so you will not get all of the features available in the full downloadable version.

2019-01-06 14_05_06-Window.png

I already have it from before so I will run the version I have locally on my system.

CONFIGURE HIERARCHIES IN BLOB

You might have been looking at this view in the portal and wondering where the button is to create a subfolder or sub-container here in our test container. The answer is that there is no such button, because that concept doesn’t actually exist. A container is basically like a root folder on a file system: all the blobs that go into the container live in that root folder, and there are no subfolders. However, it is possible to create a kind of hierarchy by using Storage Explorer.

When you run Storage Explorer you will need to sign in with your Azure account. Once done, select the subscription and you are good to go.

2019-01-06 11_51_52-Window.png

Expand your subscription –> Storage Accounts –> your storage account –> Blob Containers and click on your container. Once done, click on New Folder

2019-01-06 11_55_37-Window.png

Give your folder a name and click on Ok

2019-01-06 11_57_42-Window.png

Now, if you go to the portal and hit refresh, you will not see the folder yet; that is because you first need to upload a blob into it. In Storage Explorer click Upload. You will then have two options: to upload a folder or a file

2019-01-06 11_59_22-Window.png

I will upload one image. Click on Upload

2019-01-06 12_00_42-Window.png

Now if we go back to the portal and hit refresh, our folder will appear.

2019-01-06 12_01_45-Window.png

Storage Explorer in Azure Portal

The steps are the same if you choose to run Storage Explorer in the portal. Note: the portal version is in Preview, so if you want all capabilities, be sure to download the full version.

2019-01-06 14_07_37-Window.png

The access policy set on the container does not apply to the My Images virtual folder itself (it isn’t a real object), but it does apply to the actual blobs underneath it.

So that’s the idea of how you create a simulated hierarchy in your storage account and give the impression that there are subfolders inside the container.
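The same trick works from code: a “folder” is nothing more than a prefix in the blob name. A minimal sketch with azure-storage-blob (v12) and placeholder names that does what Storage Explorer did above:

```python
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<connection-string>")
container = service.get_container_client("test")

# Uploading a blob whose name contains "/" makes the portal and Storage Explorer
# render "My Images" as a virtual folder.
with open("picture.png", "rb") as data:
    container.upload_blob(name="My Images/picture.png", data=data)

# Listing by prefix gives the illusion of browsing that folder.
for blob in container.list_blobs(name_starts_with="My Images/"):
    print(blob.name)
```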

There is a lot to cover, and to avoid very long posts we will continue with storage accounts in Part 2 and Part 3.

Stay Tuned!

Cheers,

Nedim