This is the second part of the storage account series. We will continue our journey with the Files service. Click on the Files section.
Before we continue, you need to know that you cannot attach the same virtual machine disk to two VMs. If you want shared storage, you have to create a file share, which you can then map from multiple virtual machines at the same time.
OBS!!! If you created your storage account with the Premium performance tier, you will not be able to create a file share in it. Premium storage accounts are used for VM disks. You need to create a new storage account of the Standard type.
Clicking on Files will allow you to create a file share. Click on + File Share.
Give your share a name and set the quota. Once done, click Create.
Click on your file share. From here we can upload files, create directories, and change the quota if we want to; we have some basic controls. But the key to making this work is the Connect button, which gives you the syntax to map the drive.
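If you prefer the command line, the same share can be created with the Azure CLI. The storage account name, share name, and quota below are made up for illustration; substitute your own:

```shell
# Create a file share with a 5 GiB quota in an existing storage account
# (account and share names here are hypothetical placeholders).
az storage share create \
    --account-name mystorageacct \
    --name myshare \
    --quota 5
```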
Click on Connect button
Looking at the top, we have options to connect from Windows and from Linux.
Click Copy, paste the script into PowerShell, and execute it.
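For reference, the Linux command that the Connect pane generates looks roughly like this. The account name, share name, and key below are placeholders; the real command, including your account key, is generated for you by the portal:

```shell
# Mount the Azure file share over SMB 3.0 on a Linux VM
# (placeholder names; copy the real command from the Connect pane).
sudo mkdir -p /mnt/myshare
sudo mount -t cifs //mystorageacct.file.core.windows.net/myshare /mnt/myshare \
    -o vers=3.0,username=mystorageacct,password=<storage-account-key>,dir_mode=0777,file_mode=0777,serverino
```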
AZURE FILE SNAPSHOTS
Azure Files provides the capability to take share snapshots of file shares. Share snapshots capture the share state at that point in time. A share snapshot is a point-in-time, read-only copy of your data. You can create, delete, and manage snapshots, but you cannot modify them. Share snapshots are incremental in nature: only the data that has changed since your most recent share snapshot is saved. The maximum number of share snapshots that Azure Files allows today is 200. After 200 share snapshots, you have to delete older ones in order to create new ones.
Let's see how we can create a share snapshot.
Click on your Storage Account –> Overview –> Files (Services Section) –> Your File share and click on Create Snapshot
Once done, we can click on View Snapshots to see our snapshot. If we click on the snapshot, it shows us what it contains; we can connect to it and restore files from it as well.
I will delete the image from my file share so that we can see how to restore files from a snapshot.
Now, once that file is gone, we can restore it from a snapshot by clicking View Snapshots, choosing the snapshot we would like to restore from, and clicking on it.
When we click Restore, we have two options: restore as a copy, or overwrite the original. If you choose the first option, you will need to specify a new file name. I will select Overwrite and click OK.
And that's it. Our file is restored.
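The same snapshot can be taken from the Azure CLI. This is a sketch with hypothetical account and share names:

```shell
# Take a point-in-time snapshot of the share; the command returns a
# timestamp that identifies the snapshot (names are placeholders).
az storage share snapshot \
    --account-name mystorageacct \
    --name myshare
```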
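If you prefer the CLI, one way to restore is to download the file from a specific snapshot and upload it back to the live share. All names and the snapshot timestamp below are illustrative:

```shell
# Download the deleted file from a snapshot (timestamp is made up),
# then upload it back to the live share.
az storage file download \
    --account-name mystorageacct \
    --share-name myshare \
    --path image.png \
    --snapshot "2019-05-08T10:22:00.0000000Z" \
    --dest ./image.png

az storage file upload \
    --account-name mystorageacct \
    --share-name myshare \
    --source ./image.png
```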
SHARED ACCESS SIGNATURE
A shared access signature (SAS) is a way for you to give somebody limited access to some of the objects, or even some of the services, in Azure Storage. Access keys are basically like the administrator password on your Windows machine or the root password on your Linux box, so you don't want to lose these keys, and you don't want anybody who shouldn't have them to get them.
Using a shared access signature is an alternative to giving somebody the keys to the kingdom. The idea is that you use these keys to generate a signature that somebody can include in the URL when they're accessing an object.
Before we proceed, let me change the access level on my blob container to Private. (Click on your storage account –> Overview –> Blobs –> your container –> Access policy.) If you remember, I had one image in my test container, and if we go to the access policy (this is what I want to point out), it was set to Blob. That was the public access level, so we had anonymous read access for the blobs in this container.
So we could just grab the actual URL of this image and view it right over the internet with no authentication.
If we want to secure access to the objects in this container, we can change the access level from Blob to Private (no anonymous access). Click Save.
This gives us a use case for the shared access signature. To create one, click on your storage account –> Shared access signature, under the Settings section.
Here we have a couple of options that we can configure.
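The same change can be made with the Azure CLI. The account and container names are the hypothetical ones used throughout this post:

```shell
# Set the container's public access level to Private (no anonymous access).
az storage container set-permission \
    --account-name mystorageacct \
    --name test \
    --public-access off
```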
- Allowed services –> Here we choose which services the users can access. I will choose only the Blob service.
- Allowed resource types –> Here I will choose Object. We don't want them to have access to the Service or Container levels (in this case).
- Allowed permissions –> I will leave only Read, so that users only have the ability to read blobs in the test container.
The next section covers the start and expiry date/time and the allowed IPs and protocols.
- Start and expiry date/time –> One thing you want to keep in mind when generating a start time is the possibility of clock skew, depending on the systems other people might be using. Microsoft's recommendation is to set the start time about 15 minutes in the past if you're going to build a shared access signature with a start time. We can set the expiry date much further in the future if needed.
- Allowed IP addresses –> We don't need to specify anything here, but we have the option to limit which public IP addresses are allowed to come in and access this content.
- Allowed protocols –> The default is HTTPS only, and it is a good idea to keep it.
Last is the signing key.
We're going to use one of the keys from our Access keys, key1, to sign and generate the shared access signature.
The last step is to click Generate SAS and connection string. At the bottom we get the SAS token, which is the query string you use when accessing the blob, and the SAS URL. As you can see, we get the connection string as well.
OBS!!! This signature was built using an access key from our account, so keep in mind that if key1 is ever regenerated, this signature will be invalidated as well.
Now, to use this, you take the blob URL and append the SAS token to it.
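Putting it together: the same SAS can be generated from the CLI, and the final URL is just the blob URL with the token appended. The account, container, blob, and token values below are made-up placeholders, and the `az` command is shown commented out as a sketch:

```shell
# Generate an account SAS with the same restrictions chosen above
# (blob service only, object level, read-only, HTTPS only):
# az storage account generate-sas \
#     --account-name mystorageacct \
#     --services b --resource-types o --permissions r \
#     --https-only --expiry 2025-12-31T23:59Z

# The request URL is simply the blob URL followed by the SAS query string
# (the token below is a placeholder, not a real signature):
blob_url="https://mystorageacct.blob.core.windows.net/test/image.png"
sas_token="sv=2022-11-02&ss=b&srt=o&sp=r&sig=EXAMPLE"
sas_url="${blob_url}?${sas_token}"
echo "$sas_url"
```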
FIREWALLS AND VIRTUAL NETWORKS
The firewalls and virtual networks feature of Azure Storage allows you to lock down your storage account very strictly by limiting access to certain networks or IP address ranges. As we all know, one of the biggest advantages of the internet and the cloud is global accessibility. That accessibility causes you a bit of stress, though, when you know precisely and exclusively who you want to have access to your data. Every object in your storage account has a URI endpoint, and requests can be made to it from anywhere in the world. They might come from known IP address ranges, like your office building or that of your partners; they might come from an application or a virtual machine in Azure that's part of a virtual network; or they may come from anywhere else. With firewalls, we can allow certain IP address ranges, and with virtual networks, we can allow certain Azure virtual networks. All other requests will be denied.
A few important points to note. First, these restrictions affect both the REST and SMB protocols. Second, remember that SAS tokens can be generated that grant users access subject to a number of restrictions, one of which is the IP address. The IP addresses you specify in a SAS token limit access, but they don't extend it beyond the network rules you configure with the firewall. Finally, if you restrict access to certain IP addresses or virtual networks, you'll be restricted from managing your storage account even through the portal, unless you're browsing from a machine in the allowed range or on the allowed network.
To access Firewalls and virtual networks, click on your storage account; you will find it under Settings. By default, access to a storage account is open to all networks. This doesn't mean that just anyone is allowed in: you still have to authenticate and be authorized to access resources, and that holds true even after network rules have been established. You might have a perfectly valid SAS token, but if you attempt to make a request from an IP address that's been restricted by network rules, then you're out of luck.
To configure rules, you choose Selected networks, and the rest of the page opens up with a bunch of configuration options.
You use the first section to add any virtual networks that you want to allow access from. It's important not to reverse the logic here: you're choosing the networks that are allowed to access your storage account. You can also specify subnets within the virtual networks that you choose. You can configure up to 100 of these rules.
The next section lists the IP address ranges that are allowed access.
There are a few sensible exceptions you can make for the restrictions imposed by your configuration. You can allow certain trusted Microsoft services, you can allow logging, and you can allow metrics to be read.
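The same rules can be configured with the Azure CLI. The resource group, VNet, subnet, and IP range below are hypothetical:

```shell
# Allow a specific public IP range to reach the storage account.
az storage account network-rule add \
    --resource-group myresourcegroup \
    --account-name mystorageacct \
    --ip-address 203.0.113.0/24

# Allow a subnet of an Azure virtual network.
az storage account network-rule add \
    --resource-group myresourcegroup \
    --account-name mystorageacct \
    --vnet-name myvnet --subnet default

# Deny everything that is not explicitly allowed by the rules above.
az storage account update \
    --resource-group myresourcegroup \
    --account-name mystorageacct \
    --default-action Deny
```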
CROSS-ORIGIN RESOURCE SHARING (CORS)
The idea here is that we have an application running under one domain, and we want it to be able to access a resource running in another domain. So if an app running at domain.com calls another domain, say the one backing our storage account, then when a user points his browser at domain.com, there might be a resource that needs to be fetched from that other domain, and the browser itself will check with the remote domain to make sure those actions are okay. You can access CORS under Storage Account –> Settings section –> CORS.
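A CORS rule can also be added from the CLI. The origin, account name, and max-age below are illustrative:

```shell
# Allow GET requests from https://domain.com against the Blob service
# (account name and origin are hypothetical placeholders).
az storage cors add \
    --account-name mystorageacct \
    --services b \
    --methods GET \
    --origins "https://domain.com" \
    --allowed-headers "*" \
    --exposed-headers "*" \
    --max-age 3600
```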
STATIC WEBSITE
As the name says, this feature is used for static websites: sites composed entirely of unchanging, static files. It is designed for relatively light workflows, so there aren't too many customization options. The first step is to enable it. Click on your storage account and, under the Settings section, you will find Static website. Click Enabled.
Once we enable it, we can provide the index document (this will be shown as the default document for every folder) and the error document, which is used for 404 errors.
When you click Save, a new blob container named $web will be created, and we will be able to use our primary endpoint.
I uploaded my basic HTML file into that container. Now, if you copy the primary endpoint URL and paste it into a browser, you will be able to see that page.
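The same setup can be sketched with the Azure CLI. The account and file names are placeholders:

```shell
# Enable static website hosting with hypothetical index/error documents.
az storage blob service-update \
    --account-name mystorageacct \
    --static-website \
    --index-document index.html \
    --404-document error.html

# Upload a page into the $web container (quote $web so the shell
# does not expand it as a variable).
az storage blob upload \
    --account-name mystorageacct \
    --container-name '$web' \
    --file ./index.html \
    --name index.html
```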
CUSTOM DOMAIN
Instead of accessing all our blobs under blob.core.windows.net, we can add our own domain and access our blobs using it. To configure that, click on your storage account –> Blob service –> Custom domain.
Basically, to do this you have to prove to Microsoft that you own the domain, and they give you two ways to do that.
The first option is to create a CNAME record with your DNS provider. That way, Microsoft can query DNS and verify that you own the domain. We need to create a CNAME record and point the alias at our storage account. As noted in the portal, this is the easier method, but it results in a little bit of downtime while the information is verified.
The second option also involves registering a CNAME record (this is called indirect CNAME validation), but against the asverify subdomain, which avoids the downtime. If you choose this option, tick the Use indirect CNAME validation box and proceed with that process.
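Both variants can also be applied from the CLI once the DNS records exist. The domain, resource group, and account names below are made up:

```shell
# Direct CNAME validation: map the custom domain after the CNAME record
# for the domain itself already points at the storage account.
az storage account update \
    --resource-group myresourcegroup \
    --name mystorageacct \
    --custom-domain www.contoso.com

# Indirect (asverify) validation: requires the asverify CNAME record
# and avoids downtime during verification.
az storage account update \
    --resource-group myresourcegroup \
    --name mystorageacct \
    --custom-domain www.contoso.com \
    --use-subdomain true
```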
SOFT DELETE
When we delete blobs from our storage account, they are permanently deleted, and the only way to restore them is to have backup or replication in place before the deletion. The soft delete feature gives us the ability to recover data even after it has been deleted from our storage account. Azure will retain the data for a specified retention period, and you can restore it at any point during that period.
If we overwrite data, a soft-deleted snapshot is generated to save the state of the overwritten data, so we have the option to go back to the previous state. Soft delete is backwards compatible, so you don't have to make changes to your applications to take advantage of it. Note that you will not be able to use soft delete on blobs in the archive tier, and if you delete the storage account itself, all blobs are gone. One way to prevent that is to enable a lock on your storage account.
Soft delete is disabled by default on new and existing storage accounts. We can enable it by going to our storage account –> Blob service –> Soft delete and clicking Enabled.
When we enable soft delete we can define a retention policy of between 1 and 365 days. Once done click Save.
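The same policy can be set from the CLI. The account name and the 7-day retention below are illustrative:

```shell
# Enable blob soft delete with a 7-day retention window
# (account name is a hypothetical placeholder).
az storage blob service-properties delete-policy update \
    --account-name mystorageacct \
    --enable true \
    --days-retained 7
```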
Now I will go and delete the image I have in my test container.
Now, in order to see our deleted blobs, we need to tick the Show deleted blobs box; then we will see our deleted blob.
Click on the ellipsis (…) to see the blob properties, view snapshots (if I had modified my original file, the previous version would be stored there and I could recover it), and Undelete. The last option recovers our file.
Keep in mind that the retained snapshots are billed at the normal rates for blob storage in the hot or cool tiers.
That's it. To avoid a very long post, we will continue in the next one, where we will discuss Azure CDN (Content Delivery Network), monitoring, etc.
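Undelete is also available from the CLI. The container and blob names are the hypothetical ones from this walkthrough:

```shell
# Bring the soft-deleted blob back into the live container.
az storage blob undelete \
    --account-name mystorageacct \
    --container-name test \
    --name image.png
```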