Export Jobs – Azure Storage

Data from Azure Blob Storage can be transferred to an on-premises datacenter using an export job. You ship empty disks to the Azure datacenter, and once your data has been copied to them, the Azure datacenter staff ship the disks back to you. Figure 6.29 shows the end-to-end process involved in an export job.

FIGURE 6.29 Export job workflow

The following are the steps required to move data from Azure Storage to on-premises storage using an export job:

  1. Identify the storage account and data you need to export.
  2. Compute the number of disks required to accommodate the data you want to transfer (a rough estimation sketch follows this list).
  3. Referencing the storage account, create an export job from the Azure portal.
  4. Specify the blobs to be exported, the return address, and your carrier account number. Microsoft uses these details to ship the disks back to you once the transfer is complete.
  5. Ship the disks to the Azure datacenter where your storage account resides, and update the export job with the shipment's tracking number.
  6. Azure datacenter staff will copy the data from the storage account to the disks as soon as they receive the shipment.
  7. After the data is copied, the drives are encrypted using BitLocker and shipped back to you.
  8. Once you receive the shipment, you can decrypt the disks using the BitLocker recovery keys available in the Azure portal.
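
The disk estimate in step 2 is simple arithmetic. The following Python sketch shows one way to calculate it; the drive capacity, data size, and the 10 percent overhead allowance are hypothetical values used for illustration, not official guidance.

import math

def estimate_drive_count(data_size_tib, drive_capacity_tib, usable_fraction=0.9):
    """Estimate how many drives an export job needs.

    usable_fraction is an assumed allowance for filesystem and
    BitLocker overhead; adjust it for your own environment.
    """
    usable_per_drive = drive_capacity_tib * usable_fraction
    return math.ceil(data_size_tib / usable_per_drive)

# Example: roughly 18 TiB of blob data onto 8 TiB SATA drives (hypothetical values).
print(estimate_drive_count(data_size_tib=18, drive_capacity_tib=8))  # -> 3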

For both import and export jobs, you need to use the Import/Export tool, which is also known as the WAImportExport tool. Next, we will quickly cover the tool.

WAImportExport Tool

The Azure Import/Export tool, also known as the WAImportExport tool, is what you use to prepare drives for shipment and to repair data after a transfer with the Azure Import/Export service. The following are the functions of the tool:

  • Copy data to hard drives that will be shipped to the Azure datacenter
  • Repair data that was imported to Azure Blob Storage
  • Repair files on the drives you receive back from an export job

The requirements for the WAImportExport tool are as follows:

  • 64-bit Windows client or server
  • Internal SATA II/III HDDs or SSDs

Data copy, volume encryption, and the creation of journal files are all handled by the WAImportExport tool to ensure the integrity of the data. Journal files are essential for both import and export jobs.

Summary

This chapter focused on Azure Storage; we started the chapter with Azure Storage accounts and the storage services. Azure Storage offers five main services: blobs, queues, tables, files, and disks. You explored the different use-case scenarios of these services.

One of the main features of Azure Storage is the durability provided by its replication options, so we couldn’t continue the chapter without discussing them. We explained each replication method and the number of copies it stores. Azure Storage can be accessed from anywhere in the world over an HTTP/HTTPS connection, so understanding the endpoints and securing them is important for administrators. We covered the ways to secure the endpoints and to associate custom domains.

In the second half of the chapter, we shifted our focus to two of the storage services: Azure Blob Storage and Azure File Storage. In Azure Blob Storage, you learned the hierarchy of Blob Storage, access tiers for storage cost optimization, lifecycle management for automated transition between access tiers based on the last modified date, and how to upload files to Blob Storage. In the first exercise, you uploaded a file, and it was available on the public endpoint, so it was important to understand storage security and how to access files privately. We covered Storage Service Encryption, the encryption provided by Azure, and then we covered shared access signatures (SAS) for controlling access to our storage services.
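
As a quick refresher on shared access signatures, the following sketch uses the azure-storage-blob Python SDK (version 12) to build a read-only, time-limited SAS URL for a single blob. The account, container, and blob names are placeholders you would replace with your own.

from datetime import datetime, timedelta, timezone
from azure.storage.blob import BlobSasPermissions, generate_blob_sas

# Placeholder values; substitute your own storage account details.
account_name = "mystorageaccount"
account_key = "<storage-account-key>"
container_name = "images"
blob_name = "diagram.png"

# Grant read-only access that expires after one hour.
sas_token = generate_blob_sas(
    account_name=account_name,
    container_name=container_name,
    blob_name=blob_name,
    account_key=account_key,
    permission=BlobSasPermissions(read=True),
    expiry=datetime.now(timezone.utc) + timedelta(hours=1),
)

sas_url = (f"https://{account_name}.blob.core.windows.net/"
           f"{container_name}/{blob_name}?{sas_token}")
print(sas_url)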

After Blob Storage, you studied Azure File Storage. We discussed how to create a file share and mount it on a Windows or Linux computer. The next topic of interest was Azure File Sync: what it is, its components, and how to work with it. Microsoft recommends Azure Files as an enterprise-grade file share, so it’s necessary to understand recovery options such as snapshots and backup for Azure Files.

The last section of the chapter was all about getting familiar with the toolsets available for managing Azure Storage. We discussed Azure Storage Explorer, a graphical tool that can be used to move files to and from Azure Storage. Second, we covered AzCopy, a command-line tool that is ideal for scripting and automating data movement to and from Azure Storage. The first two tools require an Internet connection and are not practical for moving many terabytes of data. To move a large amount of data without worrying about bandwidth, you have the Import/Export service; based on the direction of the data flow, you create an import or export job in the Azure portal.
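
To make the bandwidth trade-off concrete, a back-of-the-envelope calculation such as the following Python snippet (with hypothetical numbers) shows why shipping disks wins for very large datasets.

# Rough estimate of online transfer time; the numbers are hypothetical
# and ignore protocol overhead, throttling, and retries.
data_tib = 100          # data to move, in TiB
uplink_mbps = 500       # available bandwidth, in megabits per second

data_bits = data_tib * (2 ** 40) * 8                     # TiB to bits
transfer_seconds = data_bits / (uplink_mbps * 1_000_000)
print(f"~{transfer_seconds / 86_400:.1f} days at {uplink_mbps} Mbps")
# Roughly 20 days in this example, compared to a few days of round-trip
# shipping plus copy time with the Import/Export service.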

From the previous chapters, you learned about Azure networking and Azure Storage; now it’s time to explore the third pillar of Azure infrastructure: Azure compute. In the next two chapters, you will work with Azure VMs and learn how to automate the deployment of resources using ARM templates. Though you deployed VMs throughout our exercises, you didn’t get a chance to study virtual machines in detail. Now the time has arrived; in the next chapter, you will explore Azure virtual machines.