Need for VCS
Version control systems are a category of software tools that help a software team manage changes to source code over time. Version control software keeps track of every modification to the source in a special kind of database.
- Collaboration
*With a VCS, everybody on the team is able to work absolutely freely - on any file at any time
*The VCS will later allow you to merge all the changes into a common version
- Storing versions properly
*A version control system acknowledges that there is only one project, so there is a single definitive version of it in the repository rather than scattered copies
- Restoring previous versions
- Understanding what happened
*Every time you save a new version of your project, your VCS requires you to provide a short description of what was changed
- Backup
Differentiate the three models of VCSs, stating their pros and cons
Local version control systems
- Oldest VCS
- Everything is in your Computer
- Cannot be used for collaborative software development
Centralized version control systems
- Can be used for collaborative software development
- Everyone knows to a certain degree what others on the project are doing
- Administrators have fine grained control over who can do what
- The most obvious drawback is the single point of failure that the centralized server represents
Distributed version control systems
- No single point of failure
- Clients don't just check out the latest snapshot of the files: they fully mirror the repository
- If any server dies, and these systems were collaborating via it, any of the client repositories can be copied back up to the server to restore it
- Can collaborate with different groups of people in different ways simultaneously within the same project
Git and GitHub, are they the same or different? Discuss with facts
Git is a revision control system, a tool to manage your source code history. GitHub is a hosting service for Git repositories. So they are not the same thing: Git is the tool, GitHub is the service for projects that use Git.
Compare and contrast the Git commands, commit and push
Since Git is a distributed version control system, the difference is that commit will commit changes to your local repository, whereas push will push changes up to a remote repository. git commit records your changes in the local repository; git push updates the remote repository with your local changes.
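A minimal sketch of how the two commands fit together, assuming a remote named origin and a branch named master:

    # record the staged changes in the local repository only
    git commit -m "Fix login validation"

    # nothing has left this machine yet; publish the new commit(s)
    # to the remote repository known as "origin"
    git push origin master

Until the push, collaborators who clone or fetch from the remote cannot see the commit.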
Discuss the use of staging area and Git directory
Git has three main states that your files can reside in: committed, modified, and staged. Committed means that the data is safely stored in your local database. Modified means that you have changed the file but have not committed it to your database yet. Staged means that you have marked a modified file in its current version to go into your next commit snapshot.
This leads us to the three main sections of a Git project: the Git directory, the working directory, and the staging area.
The Git directory is where Git stores the metadata and object database for your project. This is the most important part of Git, and it is what is copied when you clone a repository from another computer.
The working directory is a single checkout of one version of the project. These files are pulled out of the compressed database in the Git directory and placed on disk for you to use or modify.
The staging area is a simple file, generally contained in your Git directory, that stores information about what will go into your next commit. It’s sometimes referred to as the index, but it’s becoming standard to refer to it as the staging area.
The basic Git workflow goes something like this:
- You modify files in your working directory.
- You stage the files, adding snapshots of them to your staging area.
- You do a commit, which takes the files as they are in the staging area and stores that snapshot permanently to your Git directory.
If a particular version of a file is in the Git directory, it’s considered committed. If it’s modified but has been added to the staging area, it is staged. And if it was changed since it was checked out but has not been staged, it is modified.
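The workflow maps directly onto a short command sequence. A minimal sketch, using a hypothetical file named app.py:

    # 1. modify a file in the working directory
    echo "print('hello')" >> app.py

    # 2. stage it: a snapshot of its current content goes into the staging area (index)
    git add app.py

    # 3. commit: the staged snapshot is stored permanently in the Git directory
    git commit -m "Add greeting"

Running git status at any point reports which of the three states each file is in.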
Explain the collaboration workflow of Git, with example
The central repository represents the official project, so its commit history should be treated as sacred and immutable. If a developer’s local commits diverge from the central repository, Git will refuse to push their changes because this would overwrite official commits.
Before the developer can publish their feature, they need to fetch the updated central commits and rebase their changes on top of them. This is like saying, “I want to add my changes to what everyone else has already done.” The result is a perfectly linear history, just like in traditional SVN workflows.
If local changes directly conflict with upstream commits, Git will pause the rebasing process and give you a chance to manually resolve the conflicts. The nice thing about Git is that it uses the same git status and git add commands for both generating commits and resolving merge conflicts. This makes it easy for new developers to manage their own merges. Plus, if they get themselves into trouble, Git makes it very easy to abort the entire rebase and try again (or go find help).
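As a sketch, the commands involved when a rebase stops on a conflict are:

    git status                 # lists the files with unresolved conflicts
    # ...edit those files and remove the conflict markers...
    git add path/to/file       # mark the conflict as resolved
    git rebase --continue      # resume the rebase
    # or, if things have gone wrong:
    git rebase --abort         # return to the state before the rebase started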
Example
Let’s take a general example of how a typical small team would collaborate using this workflow. We’ll see how two developers, John and Mary, can work on separate features and share their contributions via a centralized repository, as sketched below.
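A condensed sketch of that collaboration (the repository URL, branch, and commit messages are illustrative):

    # both developers start by cloning the central repository
    git clone ssh://user@host/path/to/central-repo.git

    # John finishes his feature and publishes it first
    git commit -a -m "Complete John's feature"
    git push origin master      # succeeds: the central history has not moved

    # Mary finishes her feature and tries to publish it next
    git commit -a -m "Complete Mary's feature"
    git push origin master      # rejected: her history has diverged from central

    # Mary fetches John's commits and rebases her work on top of them
    git pull --rebase origin master
    git push origin master      # now succeeds, with a perfectly linear history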
Discuss the benefits of CDNs
• Improving website load times - By distributing content closer to website visitors using a nearby CDN server (among other optimizations), visitors experience faster page loading times. As visitors are more inclined to click away from a slow-loading site, a CDN can reduce bounce rates and increase the amount of time that people spend on the site. In other words, a faster website means more visitors will stick around longer.
• Reducing bandwidth costs - Bandwidth consumption is a primary expense for website hosting. Through caching and other optimizations, CDNs are able to reduce the amount of data an origin server must provide, thus reducing hosting costs for website owners.
• Increasing content availability and redundancy - Large amounts of traffic or hardware failures can interrupt normal website function. Thanks to their distributed nature, CDNs can handle more traffic and withstand hardware failure better than many origin servers.
• Improving website security - A CDN may improve security by providing DDoS mitigation, improvements to security certificates, and other optimizations.
Differences Between CDNs and Web Hosting
- Web Hosting is used to host your website on a server and let users access it over the internet. A content delivery network is about speeding up the access/delivery of your website’s assets to those users.
- Traditional web hosting would deliver 100% of your content to the user. If they are located across the world, the user still must wait for the data to be retrieved from where your web server is located. A CDN takes a majority of your static and dynamic content and serves it from across the globe, decreasing download times. Most times, the closer the CDN server is to the web visitor, the faster assets will load for them.
- Web Hosting normally refers to one server. A content delivery network refers to a global network of edge servers which distributes your content from a multi-host environment.
Identify free and commercial CDNs
Free-
1. CloudFlare
2. Incapsula
3. Photon by Jetpack
4. Swarmify
Commercial-
1. Google Cloud CDN
2. AWS CloudFront
3. Cloudinary
4. Imgur
5. Microsoft Azure CDN
Discuss the requirements for virtualization
1. Hardware virtualization
• VMs, emulators
2. OS-level virtualization (desktop virtualization)
• Remote desktop terminals
3. Application-level virtualization
• Runtimes (JRE/JVM, .NET), engines (game engines)
4. Containerization (also OS/application level)
• Docker
5. Other virtualization types
• Database, network, storage, etc.
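A rough illustration of how two of these levels differ in practice (assuming the JRE and Docker are installed; app.jar and the image tag are examples):

    # application-level virtualization: the JVM runs the same bytecode on any host OS
    java -jar app.jar

    # containerization: the container shares the host kernel but gets its own
    # isolated filesystem, process table, and network
    docker run --rm ubuntu:22.04 cat /etc/os-release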
Discuss and compare the pros and cons of different virtualization techniques at different levels
Pros-
- Using Virtualization for Efficient Hardware Utilization
- Using Virtualization to Increase Availability
- Disaster Recovery
- Save Energy
- Deploying Servers Quickly
- Save Space in your Server Room or Datacenter
- Testing and setting up Lab Environment
- Shifting all your Local Infrastructure to Cloud in a day
- Possibility to Divide Services
Cons-
- Extra Costs
- Software Licensing
- Learn the new Infrastructure
Identify popular implementations and available tools for each level of virtualization
Tools
1. Virtual Network User Mode Linux (VNUML)
VNUML (Barham et al. 2003) is an open-source virtualization tool, available to all users as a free download, that is used to run multiple virtual Linux systems. These virtual systems, known as guests, run their applications alongside the Linux operating system of the original system, which is referred to as the host.
2. VirtualBox
VirtualBox is used to implement virtual machines on physical computers and servers. It performs full virtualization on the host computer, which means the guest operating system executes on the host without any modification to the operating system (Geiselhart et al. 2003).
3. VMware Server
It is a free virtualization tool for both the Linux and Windows operating systems (Cox 2007). VMware Server is based on full virtualization, i.e., it allows a physical desktop computer to run more than one virtual machine, each with a potentially different operating system, as guests.
4. QEMU
QEMU provides virtualization on both the Linux and Windows operating systems. It is a popular open-source emulator (R.&M. 2007) that provides fast emulation with the help of dynamic translation. It has many useful commands for managing VMs.
5. Xen
Xen is also an open-source virtualization tool, widely used for paravirtualization of the host PC and guest computers (Bavier et al. 2006).
6. VMware
VMware is a VM (virtual machine) platform that allows an unmodified operating system to run as a user-level application on the host. An operating system executing inside VMware may crash, be reinstalled, or be rebooted without any effect on the applications running on the host computer.
VMware separates the guest operating system from the real host operating system, so that if the guest operating system fails, the physical hardware and the host machine do not suffer the consequences (Fuertes & de Vergara 2007).
VMware produces the illusion of standard personal computer hardware inside the virtual machine. It can therefore execute several unmodified operating systems at the same time on a single hardware machine, each in its own virtual machine. Unlike a software simulator, which runs code indirectly by interpreting it, the virtual machine executes code directly on the physical hardware.
7. EMF Tool
The EMF virtualization tool is an Eclipse-based plug-in, built on EMF, that supports the transparent use of virtual models, all of which are based on EMF. To create a virtual model using the EMF tool, users have to provide the contributing models along with their metamodels for the virtualization.
What is a hypervisor and what is its role?
A hypervisor or virtual machine monitor (VMM) is computer software, firmware or hardware that creates and runs virtual machines. A computer on which a hypervisor runs one or more virtual machines is called a host machine, and each virtual machine is called a guest machine. The hypervisor presents the guest operating systems with a virtual operating platform and manages the execution of the guest operating systems. Multiple instances of a variety of operating systems may share the virtualized hardware resources: for example, Linux, Windows, and macOS instances can all run on a single physical x86 machine. This contrasts with operating-system-level virtualization, where all instances (usually called containers) must share a single kernel, though the guest operating systems can differ in user space, such as different Linux distributions with the same kernel.
How is emulation different from VMs?
The purpose of a virtual machine is to create an isolated environment.
The purpose of an emulator is to accurately reproduce the behavior of some hardware.
Both aim for some level of independence from the hardware of the host machine, but a virtual machine tends to simulate just enough hardware to make the guest work, and do so with an emphasis on efficiency of the emulation/virtualization. Ultimately the virtual machine may not act like any hardware that really exists, and may need VM-specific drivers, but the set of guest drivers will be consistent across a large number of virtual environments.
An emulator on the other hand tries to exactly reproduce all the behavior, including quirks and bugs, of some real hardware being simulated. Required guest drivers will exactly match the environment being simulated.
Virtualization, paravirtualization, and emulation technology, or some combination may be used for the implementation of virtual machines. Emulators generally can't use virtualization, because that would make the abstraction somewhat leaky.
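QEMU illustrates the distinction well, since it can run in either mode. A sketch, where disk.img stands in for a guest disk image:

    # pure emulation: every guest instruction is translated in software, so the
    # guest CPU need not match the host (here, an ARM guest on any host)
    qemu-system-aarch64 -machine virt -cpu cortex-a57 -m 1024 -hda disk.img

    # virtualization: guest code runs directly on the host CPU via KVM, which is
    # much faster but requires the guest and host architectures to match
    qemu-system-x86_64 -enable-kvm -m 1024 -hda disk.img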
Compare and contrast the VMs and containers/dockers, indicating their advantages and disadvantages
A virtual machine (VM) is an emulation of a computer system. Put simply, it makes it possible to run what appear to be many separate computers on hardware that is actually one computer.
With containers, instead of virtualizing the underlying computer like a virtual machine (VM), just the OS is virtualized.
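One quick way to see the difference, assuming Docker is installed: a container reuses the host's kernel, while a VM boots its own.

    # kernel version on the host
    uname -r

    # kernel version inside an Ubuntu container: the same kernel, because only
    # the OS userland is virtualized, not the hardware
    docker run --rm ubuntu:22.04 uname -r

A VM, by contrast, boots a complete guest operating system with its own kernel, which gives stronger isolation at the cost of more memory and slower startup.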