Cloud Computing Models


Although cloud computing is still a relatively new technology, three cloud service models are generally recognized, each with a unique focus. The U.S. National Institute of Standards and Technology (NIST) defines the following cloud service models:


Software as a service (SaaS)

The capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface, such as a web browser (for example, web-based email). The consumer does not manage or control the underlying cloud infrastructure, including the network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.

Platform as a service (PaaS)

The capability provided to the consumer is to deploy consumer-created or acquired applications onto the cloud infrastructure. Examples include applications that are created by using programming languages and tools that are supported by the provider. The consumer does not manage or control the underlying cloud infrastructure, including the network, servers, operating systems, or storage. However, the consumer has control over the deployed applications and possibly the application-hosting environment configurations.

Infrastructure as a service (IaaS)

The capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources on which the consumer can deploy and run arbitrary software, including operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure, but has control over operating systems, storage, and deployed applications, and possibly limited control of select networking components (for example, host firewalls).
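The essential difference among the three service models is who controls each layer of the stack. The sketch below summarizes that split as a small lookup table; the layer names and the table itself are a simplification for discussion, not part of the formal NIST definitions.

```python
# Illustrative sketch: who manages each layer under the NIST service models.
# This table is a simplification for discussion, not part of the NIST text.

LAYERS = ["network", "servers", "os", "storage", "application"]

RESPONSIBILITY = {
    # layer:        (SaaS,       PaaS,       IaaS)
    "network":      ("provider", "provider", "provider"),
    "servers":      ("provider", "provider", "provider"),
    "os":           ("provider", "provider", "consumer"),
    "storage":      ("provider", "provider", "consumer"),
    "application":  ("provider", "consumer", "consumer"),
}

def consumer_controls(model: str) -> list:
    """Return the layers the consumer controls under a given service model."""
    index = {"saas": 0, "paas": 1, "iaas": 2}[model.lower()]
    return [layer for layer in LAYERS
            if RESPONSIBILITY[layer][index] == "consumer"]

print(consumer_controls("SaaS"))  # []
print(consumer_controls("PaaS"))  # ['application']
print(consumer_controls("IaaS"))  # ['os', 'storage', 'application']
```

As the table shows, moving from SaaS down to IaaS progressively shifts control (and responsibility) from the provider to the consumer, while the physical infrastructure always remains with the provider.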

The figure below shows these cloud service models:


Cloud Computing Deployment Models


Private cloud

The cloud infrastructure is owned or leased by a single organization and is operated solely for that organization.

Community cloud

The cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (for example, mission, security requirements, policy, and compliance considerations).

Public cloud

The cloud infrastructure is owned by an organization that sells cloud services to the general public or to a large industry group.

Hybrid cloud

The cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities. However, these entities are bound together by standardized or proprietary technology that enables data and application portability (for example, cloud bursting).
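Cloud bursting, mentioned above, can be sketched as a simple capacity check: workloads run in the private cloud until it is full, and the overflow "bursts" to the public cloud. The function, workload names, and capacity numbers below are hypothetical, purely to illustrate the placement decision.

```python
# Hypothetical sketch of a cloud-bursting placement decision in a hybrid cloud.
# The capacity units and workload sizes are made-up numbers for illustration.

def place_workloads(workloads, private_capacity):
    """Assign each workload to the private cloud until it is full,
    then 'burst' the remainder to the public cloud."""
    placement = {}
    used = 0
    for name, size in workloads:
        if used + size <= private_capacity:
            placement[name] = "private"
            used += size
        else:
            placement[name] = "public"   # burst: overflow to the public cloud
    return placement

jobs = [("web", 30), ("db", 50), ("batch", 40)]
print(place_workloads(jobs, private_capacity=100))
# web and db fit in the private cloud (30 + 50 = 80); batch (40) bursts to public
```

Real cloud-bursting decisions also weigh data gravity, latency, and compliance, but the core idea is this overflow rule.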

The figure below shows the cloud deployment models:

What is Cloud Computing?


Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (for example, networks, servers, storage, applications, and services). These resources can be rapidly provisioned and released with minimal management effort or service provider interaction. The figure below shows an overview of cloud computing.


Cloud computing provides computation, software, data access, and storage services that do not require user knowledge of the physical location and configuration of the system that delivers the services.


Private and public cloud

A cloud can be private or public. A public cloud sells services to anyone on the Internet. A private cloud is a proprietary network or a data center that supplies hosted services to a limited number of people. When a service provider uses public cloud resources to create their private cloud, the result is called a virtual private cloud. Whether private or public, the goal of cloud computing is to provide easy, scalable access to computing resources and IT services.


Cloud computing components

We describe the cloud computing components, or layers, in our model.

A cloud has four basic components:

1. Cloud Services
2. Cloud Infrastructure
3. Cloud Platform
4. SAN + Storage


Cloud Services

This layer is the service that is delivered to the client; it can be an application, a desktop, a server, or disk storage space. Clients do not need to know where or how their service is running; they just use it.


Cloud Infrastructure

This layer can be difficult to visualize, depending on the final delivered service. If the final service is a chat application, the cloud infrastructure is the servers on which the chat application runs. In contrast, if the final service is a virtualized server, the cloud infrastructure is all the other servers that are required to provide “a server” as a service to the client, such as Domain Name System (DNS), security, and management servers.


Cloud Platform

This layer consists of the platform that is selected to build the cloud. Many well-known solutions are available in the market, such as IBM Smart Business Storage Cloud, VMware vSphere, Microsoft Hyper-V, and Citrix XenServer.


SAN + Storage

This layer is where information flows and lives; without it, nothing can happen. Depending on the cloud design, the storage can be any of the previously presented solutions, such as direct-attached storage (DAS), network-attached storage (NAS), Internet Small Computer System Interface (iSCSI), storage area network (SAN), or Fibre Channel over Ethernet (FCoE). For the purposes of this book, we describe Fibre Channel or FCoE networking and compatible storage devices.

NetBackup Architecture


NetBackup is available in three editions to support organizations ranging from small and midsize businesses (SMBs) to large enterprises.
See the table below:

There are two primary architectures deployed by NetBackup:
1. Two-Tier (NetBackup Server)
2. Three-Tier (NetBackup Enterprise Server)


To make things easier to understand, I will explain the three-tier architecture first.

NetBackup Enterprise Server Architecture (Three-Tier) 

A typical NetBackup Enterprise Server storage domain consists of one NetBackup Master Server, one or more NetBackup Media Servers, and multiple NetBackup Clients. Large customers often have multiple NetBackup storage domains spread across data centers.


NetBackup Master Server 
  • Main Backup Server - Manages All Operations (Policies, Schedules, Device Configuration, and Catalog)
  • Backs Up and Restores Local and Client Data to Attached Storage (Tape / Disk)
  • Manages Attached (Direct / SAN) Storage 

The NetBackup Master Server is the “brains” for all data protection activities, from scheduling and tracking client backups to managing tape media and more. In addition, the NetBackup Master Server may have one or more storage devices attached for backing up its local data or NetBackup client data across the network. 

NetBackup Media Servers 
  • Backs Up and Restores Local and Client Data to Attached Storage (Tape/Disk)
  • Manages Attached (Direct / SAN) Storage 

Organizations with data in several locations, or with data-intensive applications, may distribute the workload across servers by deploying a NetBackup Enterprise Server as a Media Server. The Master Server is the brains, and the Media Servers are the workhorses of deployments. A NetBackup Media Server can share tape resources with NetBackup Master/Media Servers over a storage area network (SAN) or provide for backup and recovery from its own dedicated storage hardware. If a NetBackup Media Server fails, the attached NetBackup Client backups will be routed to another NetBackup Media Server. 
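The failover behavior described above can be sketched as a simple selection routine: a client's backup goes to its preferred media server, and if that server is down, the job is rerouted to another healthy one. The function and server names below are hypothetical illustrations, not NetBackup APIs.

```python
# Hypothetical sketch of media-server failover routing; not a NetBackup API.
# A client's backup is sent to its preferred media server; if that server is
# down, the job is rerouted to another healthy media server.

def route_backup(client, preferred, media_servers):
    """Pick a healthy media server for a client's backup job.
    media_servers maps server name -> True (up) / False (down)."""
    if media_servers.get(preferred):
        return preferred
    for server, healthy in media_servers.items():
        if healthy:
            return server          # failover to another media server
    raise RuntimeError("no media server available for client " + client)

servers = {"media1": False, "media2": True}   # media1 has failed
print(route_backup("client-a", preferred="media1", media_servers=servers))
# → "media2"
```

In a real deployment the equivalent decision also considers storage unit groups, load, and tape/disk availability; the sketch only shows the basic reroute-on-failure idea.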

NetBackup Clients 
  • Prepares and Sends Local Data to NetBackup Master / Media Servers for Storage
  • Receives Stored Data from NetBackup Master / Media Servers for Restoration 

Each system to be protected requires a NetBackup client. The NetBackup client sends and receives data across the local area network (LAN) or storage area network (SAN) to a NetBackup Media Server. 

Each NetBackup Enterprise Server or NetBackup Server license includes the necessary licenses to back up NetBackup's own data (not other application data). 


NetBackup Server Architecture (Two-Tier) 

Two-tier NetBackup architecture is defined as a single NetBackup server that acts as both a Master and Media Server (Master/Media Server) and multiple clients.

NetBackup Server supports one backup server and an unlimited number of clients, NDMP-NAS systems, tape drives, and tape libraries. 

Ref: NetBackup Admin Guides
For more info: https://www-secure.symantec.com/connect/sites/default/files/b-whitepaper_nbu_architecture_overview_12-2008.en-us.pdf

Deduplication on Windows Server 2012


1. Deduplication is not enabled by default. You must enable it and manually configure it under Server Roles | File and Storage Services | File and iSCSI Services. Once enabled, it also needs to be configured on a volume-by-volume basis.


2. Content is only deduplicated after it is n days old, where n is 5 by default but user-configurable.


3. Deduplication can be constrained by directory or file type. If you want to exclude certain kinds of files or folders from deduplication, you can specify those as well.


4. The deduplication process is self-throttling and can run at varying priority levels. You can set the deduplication process to run at low priority, and it will pause itself if the system is under heavy load. You can also set a window of time, during off-hours for example, in which the deduplicator runs at full speed.


5. All of the deduplication information about a given volume is kept on that volume, so the volume can be moved intact to another system that supports deduplication. If you move it to a system that doesn't support deduplication, you'll only be able to see the nondeduplicated files. The best rule is not to move a deduplicated volume unless it's to another Windows Server 2012 machine.


6. If you have a branch server also running deduplication, it shares data about deduped files with the central server and thus cuts down on the amount of data needed to be sent between the two.


7. Backing up deduplicated volumes:

A block-based backup solution or a disk-image backup method should work, as it preserves all deduplication data.

File-based backups will also work, but they won't preserve deduplication data unless they're dedupe-aware; they'll back up everything in its original, discrete, undeduplicated form. This also means the backup media must be large enough to hold the undeduplicated data.

The native Windows Server Backup solution is dedupe-aware.


8. The type of data being deduplicated is a major factor in whether it is better to use native deduplication or backup software deduplication. There are restrictions on the type of data that you can deduplicate using Windows Server 2012. The reason for this is that backup software does not usually alter the original data; instead, it focuses on removing redundancy before the data is sent to the backup server.


Windows Server 2012's deduplication does alter the original data. That being the case, some types of data are poor candidates for deduplication. Specifically, Microsoft recommends that you do not deduplicate Hyper-V hosts, Exchange servers, SQL servers, WSUS servers, or volumes containing files that are 1 TB in size or larger. The essence of this recommendation is that volumes containing large amounts of locked data (such as a database) or rapidly changing data tend not to be good candidates for deduplication.
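The space savings behind deduplication come from storing each unique chunk of data only once and replacing later copies with references to it. The sketch below illustrates that idea with fixed-size chunks and SHA-256 hashes; it is a deliberate simplification (Windows Server 2012 actually uses variable-size chunking on much larger chunks), and the class and method names are my own.

```python
# Simplified illustration of chunk-level deduplication: each unique chunk is
# stored once, keyed by its SHA-256 hash, and files become lists of hashes.
# (Windows Server 2012 uses variable-size chunking; tiny fixed-size chunks
# here keep the example short.)
import hashlib

CHUNK_SIZE = 4  # unrealistically small, for demonstration only

class ChunkStore:
    def __init__(self):
        self.chunks = {}          # hash -> chunk bytes (stored only once)
        self.files = {}           # filename -> ordered list of chunk hashes

    def add_file(self, name, data):
        hashes = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)   # dedupe: store once
            hashes.append(digest)
        self.files[name] = hashes

    def read_file(self, name):
        """Rehydrate a file from its chunk references."""
        return b"".join(self.chunks[h] for h in self.files[name])

    def stored_bytes(self):
        return sum(len(c) for c in self.chunks.values())

store = ChunkStore()
store.add_file("a.txt", b"AAAABBBBCCCC")
store.add_file("b.txt", b"AAAABBBBDDDD")   # shares two chunks with a.txt

assert store.read_file("a.txt") == b"AAAABBBBCCCC"
print(store.stored_bytes())   # 16 bytes stored instead of 24
```

This also makes point 7 above concrete: a dedupe-aware (block-based) backup can copy the 16 bytes of unique chunks plus the reference lists, while a naive file-based backup rehydrates each file and must hold all 24 bytes.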