Many tools are used in cloud architecture to provide performance transparency and automation. They allow architects to manage the cloud infrastructure, monitor reports, and share applications across the architecture. Automation is a key component of cloud architecture and helps improve the degree of quality.
Hybrid cloud: It may consist of multiple service providers and combines the features of public and private clouds. A company uses it when it requires both private and public clouds.
Community Cloud: This model is quite expensive and is used when organizations have common goals and requirements and are ready to share the benefits of the cloud service.
To overcome maintenance costs and to optimize resources, there is the concept of three data centers in the cloud, which provides recovery and backup in case of disaster or system failure and keeps all the data safe and intact.
Amazon SQS (Simple Queue Service) is used to pass messages between different connectors; it acts as a communicator between the various components of Amazon's architecture.
In order to make the system more resilient to bursts of traffic or load, a buffer is used. It synchronizes the different components, which otherwise receive and process requests at unbalanced rates. The buffer manages the balance between components and makes them work at the same pace to provide faster services.
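As a sketch of how a queue can serve as such a buffer, here is a minimal producer/consumer exchange using Amazon SQS through boto3; the queue name and region are invented for illustration:

```python
import boto3

# Assumed queue name and region; create_queue returns the queue URL.
sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = sqs.create_queue(QueueName="demo-buffer-queue")["QueueUrl"]

# Producer: a fast component enqueues work instead of calling the consumer directly.
sqs.send_message(QueueUrl=queue_url, MessageBody="process-order-42")

# Consumer: a slower component drains the queue at its own pace.
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=5)
for msg in resp.get("Messages", []):
    print("handling:", msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```

Because the queue absorbs bursts, the consumer never sees more load than it can handle.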
A hypervisor is a Virtual Machine Monitor (VMM) that manages resources for virtual machines. There are mainly two types of hypervisors:
Type 1: The guest VM runs directly on the host hardware, e.g., Xen, VMware ESXi.
Type 2: The guest VM runs on the hardware through a host OS, e.g., KVM, Oracle VirtualBox.
Cloud consumers are the individuals and groups within your business unit that use different types of cloud services to get a task accomplished. A cloud consumer could be, for example, a developer using compute services from a public cloud.
End users are those who take advantage of services that your business has created within a cloud environment. They have no idea whether you are using a public or a private cloud; as far as they are concerned, they are interacting directly with the services and the value those services provide.
Cloud service providers are commercial vendors or companies that create their own capabilities and sell their services to cloud consumers. Alternatively, a company might decide to become an internal cloud service provider to its own partners, employees, and customers, either as an internal service or as a profit center. Cloud service providers also create applications or services for such environments.
The cloud computing architecture comprises all the components of a cloud model that fit together from an architectural perspective. The figure below depicts how the various cloud services are related to support the needs of businesses. On the left side, the cloud service consumer represents the types of users of cloud services. No matter what the requirements of a particular constituent are, it is important to bring together the right types of services that can support both internal and external users. Management should be able to make services readily available to consumers to support changing business needs. The applications, middleware, infrastructure, and services that are built on on-premises computing models also fall within this category. In addition, the model depicts the role of a cloud auditor: an internal or external group that provides oversight and makes sure that the consumer group meets its obligations.
Cloud storage device mechanisms provide common levels of data storage, such as:
<> Files – Collections of data grouped into files, which are located in folders.
<> Blocks – A block is the smallest unit of data that is individually accessible. It is the lowest level of storage and the closest to the hardware.
<> Datasets – Data sets organized into a table-based, delimited, or record format.
<> Objects – Data and the associated metadata with it are organized as web-based resources.
Each of the above data storage levels is associated with a certain type of technical interface, which corresponds to a particular type of cloud storage device and to the cloud storage service that exposes its API.
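To make the object level concrete, here is a minimal sketch using the S3 API through boto3, in which an object's data and its user-defined metadata are stored and retrieved together; the bucket, key, and metadata values are hypothetical:

```python
import boto3

s3 = boto3.client("s3")

# Object storage: the payload and its metadata travel together as one web resource.
s3.put_object(
    Bucket="example-bucket",
    Key="reports/2023/summary.txt",
    Body=b"quarterly summary ...",
    Metadata={"department": "finance", "retention": "7y"},  # user-defined metadata
)

# Retrieving the object returns both the data and the associated metadata.
obj = s3.get_object(Bucket="example-bucket", Key="reports/2023/summary.txt")
print(obj["Metadata"])    # {'department': 'finance', 'retention': '7y'}
print(obj["Body"].read())
```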
Serverless components in cloud computing allow applications to be built without the complexity of managing the underlying infrastructure. One can write code without having to provision or manage a server.
Serverless platforms take care of virtual machine and container management. Multithreading and hardware allocation are also taken care of by the serverless components.
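As a minimal illustration of the model, here is a sketch of an AWS Lambda handler in Python: the developer supplies only this function, while the platform provisions and scales the servers that run it (the event field used here is assumed for illustration):

```python
import json

def lambda_handler(event, context):
    # 'event' carries the request payload; 'context' carries runtime information.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```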
Serverless computing has the following advantages and disadvantages:
Advantages:
<> It is cost-effective.
<> The operations on serverless computing are simplified.
<> Serverless computing helps boost productivity.
<> It offers scaling options.
<> It involves zero server management.
Disadvantages:
<> Serverless code can cause response latency.
<> It is not ideal for high-computing operations because of resource limitations.
<> For serverless computing, the responsibility for security lies with the service provider rather than the consumer, which might leave applications more vulnerable.
<> Debugging serverless code is a bit more challenging.
There are several areas of technology that contribute to modern-day cloud-based platforms. These are known as cloud-enabling technologies. Some of the cloud-enabling technologies are:
1. Broadband Networks and Internet Architecture
2. Data Center Technology
3. (Modern) Virtualization Technology
4. Web Technology
5. Multitenant Technology
6. Service Technology
Microservices is an approach to developing applications as a set of services whose code is independent of each other and of the underlying development platform. Once created, each microservice runs a unique process and communicates through well-defined and standardized APIs. These services are defined in the form of a catalog so that developers can easily locate the right service and also understand the governance rules for its usage.
The reason microservices are so important for a true cloud environment lies in these four key benefits:
<> Each microservice is built to serve a specific and limited purpose, and hence application development is simplified. Small development teams can then focus on writing code for some of the narrowly defined and easily understood functions.
<> Code changes will be smaller and less complex than with a complex integrated application, making it easier and faster to make changes, whether to fix a problem or to upgrade service with new requirements.
<> Scalability — It is easier to deploy an additional instance of a service, or to change that service, as needs evolve.
<> Microservices are fully tested and validated. When new applications leverage existing microservices, developers can assume the integrity of the new application without the need for continual testing.
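As a concrete sketch of a microservice with a narrowly defined purpose and a well-defined API, here is a minimal HTTP service, assuming the Flask framework; the service name, route, and data are invented for illustration:

```python
from flask import Flask, jsonify

app = Flask(__name__)

# An in-memory store stands in for this service's private database;
# other services never touch it directly, only the API below.
_inventory = {"sku-1": 12, "sku-2": 3}

@app.route("/stock/<sku>")
def get_stock(sku):
    # The service contract: GET /stock/<sku> -> {"sku": ..., "count": ...}
    return jsonify({"sku": sku, "count": _inventory.get(sku, 0)})

if __name__ == "__main__":
    app.run(port=5001)  # each microservice runs as its own process
```

Because the service owns its data and exposes only this contract, it can be rewritten, scaled, or redeployed without touching the rest of the application.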
The cloud usage monitor mechanism is an autonomous and lightweight software program that is responsible for collecting and processing the IT resource usage data.
Cloud usage monitors can exist in different formats depending on what type of usage metrics they are designed to collect and how the usage data needs to be gathered. The following points describe three common agent-based implementation formats.
Monitoring Agent
Resource Agent
Polling Agent
A monitoring agent is an intermediary, event-driven program that exists as a service agent and resides along existing communication paths. It transparently monitors and analyzes dataflows. Commonly, the monitoring agent is used to measure network traffic and message metrics.
A resource agent is a processing module that collects usage data through event-driven interactions with specialized resource software. It is applied to check usage metrics based on pre-defined, observable events at the resource software level, such as initiating, suspending, resuming, and vertical scaling.
A polling agent is a processing module that gathers cloud service usage data by polling IT resources. It is commonly used to periodically monitor IT resource status, such as uptime and downtime.
Each of these can be designed to forward collected usage data to a log database for post-processing and for reporting purposes.
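A minimal sketch of a polling agent might look like the following, assuming a hypothetical health endpoint for the monitored resource and a local SQLite database standing in for the log database:

```python
import sqlite3
import time
import urllib.request

RESOURCE_URL = "http://example.internal/health"  # hypothetical monitored resource

db = sqlite3.connect("usage_log.db")
db.execute("CREATE TABLE IF NOT EXISTS status_log (ts REAL, status TEXT)")

def poll_once():
    # Poll the resource and record its uptime/downtime in the log database.
    try:
        urllib.request.urlopen(RESOURCE_URL, timeout=2)
        status = "up"
    except OSError:
        status = "down"
    db.execute("INSERT INTO status_log VALUES (?, ?)", (time.time(), status))
    db.commit()

while True:
    poll_once()
    time.sleep(60)  # fixed polling interval
```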
‘Cloud native’ describes software designed around containers, microservices, dynamic orchestration, and continuous delivery. Every part of a cloud-native application runs in its own container and is dynamically orchestrated with other containers to optimize the way resources are utilized.
The Cloud Native Computing Foundation gives a clear definition of cloud-native:
Container packaged: This means a standard, resource-efficient way to package applications. By using a standard container format, more applications can be packed densely onto a host.
Dynamically managed: This means a standard way to discover, deploy, and scale up and down containerized applications.
Microservices oriented: This means a method to decompose the application into modular, independent services that interact through well-defined service contracts.
An API gateway allows multiple APIs to act together as a single gateway, providing a uniform experience to the user; each API call is processed reliably. The API gateway manages the APIs centrally and provides enterprise-grade security. Common tasks of the API services, such as usage statistics, rate limiting, and user authentication, can be handled by the API gateway.
Rate limiting is a way to limit network traffic. It typically runs within the application rather than on the web server, and it tracks client IP addresses and the time between each request.
It can eliminate certain suspicious and malicious activities, and it can stop bots that impact a website. Rate limiting thereby protects against API overuse, which is important to prevent.
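A minimal sketch of this idea, tracking request timestamps per client IP over a sliding window, might look like the following; the limits and the sample IP are arbitrary:

```python
import time
from collections import defaultdict

MAX_REQUESTS = 10    # at most 10 requests ...
WINDOW = 60.0        # ... per 60-second sliding window, per client IP
_requests = defaultdict(list)  # ip -> timestamps of recent requests

def allow(ip: str) -> bool:
    now = time.time()
    recent = [t for t in _requests[ip] if now - t < WINDOW]
    if len(recent) >= MAX_REQUESTS:
        _requests[ip] = recent
        return False     # over the limit: reject (e.g., HTTP 429)
    recent.append(now)
    _requests[ip] = recent
    return True

print(allow("203.0.113.7"))  # True until the window fills up
```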
1>> Hybrid cloud refers to a combination of public and private clouds bound together by technology. By allowing data and applications to move between private and public clouds, a hybrid cloud gives your business greater flexibility and more deployment options, and it helps optimize your existing infrastructure, security, and compliance.
2>> Hybrid IT, in turn, refers to an approach to enterprise computing in which the organization provisions and manages some IT resources in-house but uses cloud-based services for others. The hybrid approach allows an enterprise to maintain a centralized approach to IT governance while experimenting with cloud computing.
VPN stands for Virtual Private Network. A VPN manages the security of data during communication in the cloud environment; with a VPN, you can make a public network behave like a private network.
The most obvious question that comes to mind is whether cloud data is secure. To ensure that it is, check that there is no data leak by applying an encryption key to the data you send while it moves from point A to point B in the cloud.
Security is important when it comes to the applications and services used by the user. The cloud offers many levels of security:
1. Identity management
This allows and authorizes application services and hardware components to be used only by authorized users.
2. Access control
This checks the permissions granted to users, controlling which users may access the cloud environment.
3. Authorization and authentication
This ensures that only authorized and authenticated people can access and change the applications and data.
Cloud computing platforms support various open-source databases. Some of them are:
1. MongoDB
This is an open-source, document-oriented, schema-free database system. It is written in C++ and supports high-volume storage, organizing data in collections of documents rather than tables.
2. CouchDB
This open-source database is an Apache project. It is used for storing data efficiently.
3. LucidDB
This database is developed in Java/C++ for data warehousing. It offers various features and functionalities for handling and maintaining a data warehouse.
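To illustrate MongoDB's document-oriented, schema-free model, here is a minimal sketch using the pymongo driver; it assumes a local MongoDB server, and the database, collection, and documents are invented:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["demo"]

# Schema-free: documents in the same collection may have different fields.
db.users.insert_one({"name": "Asha", "roles": ["admin"]})
db.users.insert_one({"name": "Ben", "age": 31})

for doc in db.users.find({"name": "Asha"}):
    print(doc)
```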
Virtualization is the basis of cloud computing. Various platforms, such as VMware, offer technology for creating a private cloud and provide a path for connecting external clouds with private clouds. There are three main features that must be identified when creating a private cloud:
<> Cloud operating system.
<> Management of service-level policies.
<> Virtualization.
<> Memcache enables a user to work through handy object-oriented and procedural interfaces, and it can reduce database load in dynamic web applications.
<> Memcached is supported by the libMemcached library, which provides an API for communicating with Memcached servers.
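A minimal cache-aside sketch using the pymemcache client might look like the following, assuming a Memcached server on the default port; the key scheme and the stand-in database lookup are hypothetical:

```python
from pymemcache.client.base import Client

cache = Client(("localhost", 11211))  # assumed local Memcached server

def get_user(user_id):
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return cached                  # cache hit: skip the database entirely
    value = b"row-from-database"       # stand-in for a real database query
    cache.set(key, value, expire=300)  # cache for five minutes
    return value

print(get_user(42))
```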
Data encapsulation restricts operations on data to a well-defined set; in networking, it also breaks information down into smaller, manageable chunks before transmission across the network. In object-oriented programming, it is known as data hiding, which keeps a class's implementation details hidden from its users.
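To illustrate the data-hiding sense, here is a minimal Python sketch in which a class's internal state can only be changed through its public interface; the class and its rules are invented:

```python
class Account:
    def __init__(self, balance):
        self.__balance = balance      # name-mangled: hidden from outside code

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        self.__balance += amount      # state changes only through this method

    @property
    def balance(self):
        return self.__balance

acct = Account(100)
acct.deposit(50)
print(acct.balance)   # 150; accessing acct.__balance raises AttributeError
```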
OpenStack is an open-source cloud computing platform serving IaaS (Infrastructure as a Service). It controls large pools of compute, storage, and networking resources that are managed through APIs or a dashboard.
The Recovery Time Objective (RTO) is the maximum time a company has agreed to wait for recovery when a system fails in the cloud. It is set in the contract between the cloud provider and the client.
The Recovery Point Objective (RPO) is the maximum amount of data loss that the organisation can accept under its contract. The data loss is measured in time: for example, an RPO of four hours means backups must be taken at least every four hours.
A VPC (Virtual Private Cloud) manages storage and compute resources for organisations by providing a layer of isolation and abstraction.
The architecture for a VPC with public and private subnets is shown in the figure below.
When creating a new VPC instance, it comes with these components by default:
> Route table
> Network ACL
> Security Groups
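As a sketch of creating such a VPC programmatically, the following uses boto3 to create a VPC with one public and one private subnet and to attach an internet gateway; the CIDR blocks and region are illustrative:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

# One public and one private subnet inside the VPC.
public = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")
private = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.2.0/24")

# An internet gateway makes the public subnet reachable from the internet.
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)
```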
Data can be encrypted in S3 using SSE-S3, SSE-C, or SSE-KMS.
SSE-S3 provides the solution in which S3 oversees key management and key protection, using multiple layers of security.
SSE-C lets S3 perform the encryption and decryption of data while the customer controls the key used for encryption. Key management and storage are the customer's responsibility and are not provided by AWS.
SSE-KMS uses the AWS Key Management Service (KMS) to store the keys used in encryption. KMS also provides an additional layer of security by keeping master keys; special permission is needed to be able to use a master key.
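A minimal sketch of selecting these encryption modes at upload time through boto3 might look like the following; the bucket, object keys, and KMS key alias are hypothetical:

```python
import boto3

s3 = boto3.client("s3")

# SSE-S3: Amazon manages the keys (AES-256).
s3.put_object(Bucket="example-bucket", Key="a.txt", Body=b"data",
              ServerSideEncryption="AES256")

# SSE-KMS: the encryption key lives in AWS KMS; the key alias is invented.
s3.put_object(Bucket="example-bucket", Key="b.txt", Body=b"data",
              ServerSideEncryption="aws:kms",
              SSEKMSKeyId="alias/my-app-key")
```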
Application Load Balancer (ALB) - The ALB allows routing based on port numbers. It can also route requests to Lambda functions, and it can direct requests to many ports on the target. The Application Load Balancer supports only Layer 7: HTTP/2 and WebSockets. It can return fixed responses on its own, so the server is freed from replying to redundant requests. ALBs find use in microservices and applications.
Network Load Balancer (NLB) - The Network Load Balancer supports Layer 4, that is, TCP and UDP. It is faster and higher-performance since it operates lower in the OSI model. It uses static IPs and can also be assigned Elastic IPs. Example use cases are real-time data streaming and video streaming.
Classic Load Balancer (CLB) or Elastic Load Balancer (ELB version 1) - The ELB is the oldest load balancer and the only one that offers application-specific sticky session cookies. It works on both Layer 7 and Layer 4. The ELB also supports EC2-Classic.
Memory-Optimized Instances - They provide fast performance for applications that process big data in memory. Memory-optimized instances include support for enhanced networking, with up to 25 Gbps of network bandwidth. They come EBS-optimised by default.
Use cases include in-memory caches and open-source databases.
Compute Optimised Instances - Compute Optimised instances provide high-performance computing resources and fast batch processing. They are ideal for media transcoding, gaming servers, and ad-serving engines. Compute Optimised instances use the AWS Nitro System, which combines dedicated hardware and a lightweight hypervisor. Like memory-optimized instances, they come with optimised EBS as well.
Accelerated Computing Instances - These instances use co-processors and hardware accelerators to improve performance. They are used in graphics processing, floating-point calculations, and data pattern matching. Accelerated Computing instances use extra hardware power to overcome software limitations and latency. They also support the Elastic Fabric Adapter (EFA).
Storage Optimised Instances - Storage Optimised instances are ideal for workloads that need high sequential read and write throughput. These instances use their local storage to store data.
They provide low-latency, high-speed random I/O operations. They are used for NoSQL databases like Redis and MongoDB, and for data warehousing.
General Purpose Instances - General Purpose instances provide a mixture of computing, memory, and networking resources. They find use in applications that consume these resources in equal proportions, for example, web servers and code repositories.
CloudFormation helps in creating and maintaining AWS infrastructure as stacks, where a stack is a collection of AWS resources. CloudFormation enables users to create stacks quickly with minor overhead: one configures the desired AWS infrastructure in a template file (JSON or YAML).
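As a sketch of this workflow, the following defines a tiny JSON template inline and creates a stack from it through boto3; the stack name and the single bucket resource are invented for illustration:

```python
import json
import boto3

# A minimal template describing one stack resource (an S3 bucket).
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "DemoBucket": {"Type": "AWS::S3::Bucket"}
    },
}

cfn = boto3.client("cloudformation", region_name="us-east-1")
cfn.create_stack(StackName="demo-stack", TemplateBody=json.dumps(template))
```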
Amazon AWS provides Shield for security against attacks. AWS Shield offers two tiers of security: Standard and Advanced.
AWS Shield Standard, which comes by default with AWS, can be used as a first security gate. It protects the network and transport layers.
Subsequently, one can also subscribe to Shield Advanced for another layer of added security. AWS Shield Advanced provides integration with the AWS Web Application Firewall (WAF). AWS WAF provides custom rules to filter out traffic with threat signatures.
The Web Application Firewall provides three main rule actions: allow all requests that match a rule, block all requests that match, or count all matching requests (for example, when evaluating a new policy).
Microsoft Azure is a cloud platform that provides cloud services to help deliver new solutions by building, running, and managing applications across multiple clouds. It offers services like content delivery networks (CDNs), virtual machines (VMs), and more.
Aurora is a database engine that delivers reliability and speed on par with industry-standard databases. It backs up data to AWS S3 in real time without any performance impact, and it backs up storage routinely without database administrators having to intervene.
RDS (Amazon Relational Database Service) is a traditional relational database offering that provides scalable and cost-effective solutions for storing data. It supports six database engines: MySQL, PostgreSQL, Amazon Aurora, MariaDB, Microsoft SQL Server, and Oracle.
<> ELB isn’t compatible with EKS containers running on Fargate.
<> It can't route traffic to more than one port on an instance.
<> ELB doesn’t support forwarding data to IP addresses. It can only forward it to the EKS/ECS container or EC2 instance.
<> It also doesn't support WebSockets.
<> In ELB, there is no concept of target groups in routing.
Target groups are another layer of abstraction and redirection in load balancers. When created, target groups are tagged as one of three types: instances (identified by instance ID), IP addresses, and Lambda functions. Dedicated listeners listen to the traffic coming into the load balancer and route it to the appropriate target group. The target group then routes the data to specific IPs, instances, or containers. The target group also checks the health of its targets and determines how to split the incoming traffic.
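A minimal sketch of creating a target group and registering instance targets through boto3 might look like the following; the VPC ID, instance IDs, and health-check path are placeholders:

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

tg = elbv2.create_target_group(
    Name="web-tg",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",        # could also be "ip" or "lambda"
    HealthCheckPath="/health",    # used for the target health checks
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# Register two EC2 instances as targets; the load balancer splits traffic
# across the healthy ones.
elbv2.register_targets(TargetGroupArn=tg_arn, Targets=[
    {"Id": "i-0aaaaaaaaaaaaaaaa"},
    {"Id": "i-0bbbbbbbbbbbbbbbb"},
])
```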
An AMI (Amazon Machine Image) is an image of the operating system, application server, and applications that will run on the cloud server. Launching an image creates an instance in the cloud. An AMI includes metadata for the root volume, and its launch permissions decide which AWS accounts are allowed to use it to launch instances.
Lambda is a compute service that runs your code without you having to provision or manage servers. Each client request instantiates a new Lambda invocation, and Lambda charges you only for the time your code actually runs.
Vertical scaling means adding more RAM and processing power to an existing machine. AWS provides instances with up to 488 GB of RAM and 128 virtual cores. To prepare for vertical scaling, one first needs to decouple the tiers of the application, which are then distributed across and handled by separate servers. Finally, one can run the newer, larger instance and make the appropriate changes to reflect the vertical scaling.
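As a sketch of the final resizing step through boto3, the following stops an instance, changes its instance type, and starts it again; the instance ID and target type are placeholders, and the stop/start cycle assumes an EBS-backed instance:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
instance_id = "i-0123456789abcdef0"   # placeholder

# Resizing an EBS-backed instance requires a stop/start cycle.
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

ec2.modify_instance_attribute(InstanceId=instance_id,
                              InstanceType={"Value": "r5.4xlarge"})

ec2.start_instances(InstanceIds=[instance_id])
```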
AWS Ops Automator automatically adjusts capacity to maintain steady, predictable performance at the lowest possible cost. The solution can resize instances by restarting an existing instance with a new size, or by replacing the current instance with a new, resized one.
MariaDB, PostgreSQL, MongoDB, Oracle, and MySQL are some common databases used on AWS.
EC2 instances use Elastic IPs for communication over the internet or within a private network. You first allocate an Elastic IP to your AWS account and can then map it to instances. Elastic IPs help avoid downtime when a resource fails: fail-safety is ensured by remapping the failed instance's IP to a new instance.
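A minimal sketch of the allocate-then-map flow through boto3 might look like the following; the instance ID is a placeholder:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Step 1: allocate an Elastic IP to the account.
alloc = ec2.allocate_address(Domain="vpc")

# Step 2: map it to an instance. On failure, the same call against a healthy
# instance remaps the address, providing fail-safe behavior.
ec2.associate_address(AllocationId=alloc["AllocationId"],
                      InstanceId="i-0123456789abcdef0")
```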
Route53 is a highly scalable and available Domain Name System (DNS) service provided under Amazon AWS. These are its three prime functions:
a. Register domain names
b. Route internet traffic to resources
c. Send health checks to resources and devices.
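As a sketch of the traffic-routing function, the following uses boto3 to upsert a DNS A record in a hosted zone; the zone ID, domain name, and IP address are placeholders:

```python
import boto3

r53 = boto3.client("route53")

r53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGHIJ",   # placeholder hosted zone
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",             # create or update the record
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [{"Value": "203.0.113.10"}],
            },
        }]
    },
)
```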
Elastic Beanstalk acts as a console for deploying Amazon services, reducing the complexity of the environment. One simply uploads the application, and Elastic Beanstalk takes care of setting up and managing the resources. Services like load balancing, scaling, health monitoring, and on-demand provisioning are all handled by Elastic Beanstalk.