The operating system (OS) is the most important program that runs on a computer. Every general-purpose computer must have an operating system to run other programs and applications. Computer operating systems perform basic tasks, such as recognizing input from the keyboard, sending output to the display screen, keeping track of files and directories on the disk, and controlling peripheral devices such as printers.
For large systems, the operating system has even greater responsibilities and powers. It is like a traffic cop -- it makes sure that different programs and users running at the same time do not interfere with each other. The operating system is also responsible for security, ensuring that unauthorized users do not access the system.
Types of OS
An operating system is a platform between the hardware and the user which is responsible for the management and coordination of activities and the sharing of the resources of a computer. It hosts the applications that run on a computer and handles the operations of the computer hardware. There are different types of operating systems. These are as follows:
1. Real-time Operating System: A multitasking operating system that aims at executing real-time applications.
2. Multi-user and Single-user Operating Systems: A multi-user operating system allows multiple users to access a computer system concurrently, while a single-user system supports only one user at a time.
3. Multi-tasking and Single-tasking Operating Systems: When only a single program is allowed to run at a time, the system is classified as single-tasking; when the operating system allows the execution of multiple tasks at one time, it is classified as a multi-tasking operating system.
4. Distributed Operating System: An operating system that manages a group of independent computers and makes them appear to be a single computer is known as a distributed operating system.
5. Embedded System: The operating systems designed for being used in embedded computer systems are known as embedded operating systems.
Functions
An operating system is a software program that acts as an interface between the user and the computer. It is the software package which allows the computer to function.
Functions:
- Program creation
- Program execution
- Access to Input/Output devices
- Controlled access to files
- System access
- Error detection and response
- Interpreting the commands
- Managing peripherals
- Memory management
- Processor management
- Information management
- Process communication
- Networking
Types of Operating System
- DOS (Disk Operating System)
- UNIX
- LINUX
- Windows
- Windows NT
Multiprogramming, Multiprocessing, Multitasking, and Multithreading
When you approach operating system concepts, there are several confusing terms that may look similar but in fact refer to different concepts. In this post, I will try to clarify four such terms which often cause perplexity: multiprogramming, multiprocessing, multitasking, and multithreading.
In a modern computing system, there are usually several concurrent application processes which compete for (few) resources like, for instance, the CPU. As already introduced, the operating system (OS), amongst other duties, is responsible for the effective and efficient allocation of those resources. Generally speaking, the OS module which handles resource allocation is called the scheduler. Depending on the type of OS to be realized, different scheduling policies may be implemented.
Multiprogramming
In a multiprogramming system there are one or more programs loaded in main memory which are ready to execute. Only one program at a time is able to get the CPU for executing its instructions (i.e., there is at most one process running on the system) while all the others are waiting their turn.
The main idea of multiprogramming is to maximize the use of CPU time. Indeed, suppose the currently running process is performing an I/O task (which, by definition, does not need the CPU to be accomplished). Then, the OS may interrupt that process and give the control to one of the other in-main-memory programs that are ready to execute (i.e. process context switching). In this way, no CPU time is wasted by the system waiting for the I/O task to be completed, and a running process keeps executing until either it voluntarily releases the CPU or when it blocks for an I/O operation. Therefore, the ultimate goal of multiprogramming is to keep the CPU busy as long as there are processes ready to execute.
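The idea above can be sketched with a toy simulation (process names and burst times are invented; a real OS scheduler is far more involved): the running process keeps the CPU through its CPU bursts and is switched out only when it blocks for I/O, at which point another ready program gets the CPU.

```python
from collections import deque

def run_multiprogrammed(processes):
    """processes: list of (name, [bursts]); a burst is ('cpu', t) or ('io', t).
    Returns the order in which bursts are serviced."""
    ready = deque(processes)
    trace = []
    while ready:
        name, bursts = ready.popleft()
        kind, t = bursts.pop(0)
        trace.append((name, kind, t))
        if bursts:
            if kind == "io":
                ready.append((name, bursts))      # I/O issued: give the CPU away
            else:
                ready.appendleft((name, bursts))  # keeps the CPU until it blocks
    return trace

# P2 gets the CPU while P1 waits for its I/O to complete:
print(run_multiprogrammed([("P1", [("cpu", 2), ("io", 3), ("cpu", 1)]),
                           ("P2", [("cpu", 4)])]))
# [('P1', 'cpu', 2), ('P1', 'io', 3), ('P2', 'cpu', 4), ('P1', 'cpu', 1)]
```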
Note that in order for such a system to function properly, the OS must be able to load multiple programs into separate areas of the main memory and provide the required protection so that one process cannot be modified by another. Another problem that must be addressed when keeping multiple programs in memory is fragmentation, as programs enter or leave the main memory. A further issue is that large programs may not fit in memory all at once; this can be solved by using paging and virtual memory.
Finally, note that if there are N ready processes and all of them are highly CPU-bound (i.e., they mostly execute CPU tasks and perform few or no I/O operations), in the very worst case one program might wait for all the other N-1 ones to complete before executing.
Multiprocessing
Multiprocessing sometimes refers to executing multiple processes (programs) at the same time. This might be misleading because we have already introduced the term “multiprogramming” to describe that before.
In fact, multiprocessing refers to the hardware (i.e., the CPU units) rather than the software (i.e., running processes). If the underlying hardware provides more than one processor then that is multiprocessing. Several variations on the basic scheme exist, e.g., multiple cores on one die or multiple dies in one package or multiple packages in one system.
Anyway, a system can be both multiprogrammed by having multiple programs running at the same time and multiprocessing by having more than one physical processor.
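In Python, for example, the standard multiprocessing module exposes the multiple processors described above; the sketch below (square() is just an invented example task) fans CPU-bound work out to one worker process per available CPU.

```python
import multiprocessing as mp

def square(n):
    return n * n

def parallel_squares(values):
    # One worker process per processor reported by the OS.
    with mp.Pool(processes=mp.cpu_count()) as pool:
        return pool.map(square, values)

if __name__ == "__main__":
    print(mp.cpu_count(), "processor(s) available")
    print(parallel_squares([1, 2, 3, 4]))  # [1, 4, 9, 16]
```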
Multitasking
Multitasking has the same meaning as multiprogramming but in a more general sense, as it refers to having multiple programs, processes, tasks, or threads running at the same time. This term is used in modern operating systems when multiple tasks share a common processing resource (e.g., CPU and memory). At any time the CPU is executing one task only, while other tasks wait their turn. The illusion of parallelism is achieved when the CPU is reassigned to another task (i.e., process or thread context switching).
There are subtle differences between multitasking and multiprogramming. A task in a multitasking operating system is not necessarily a whole application program; it can also be a "thread of execution" when one process is divided into sub-tasks. Each smaller task does not hold the CPU until it finishes, as in older multiprogramming, but instead gets a fair share of the CPU time called a quantum.
Just to make it easy to remember, both multiprogramming and multitasking operating systems are (CPU) time-sharing systems. However, while in multiprogramming (older OSs) one program as a whole keeps running until it blocks, in multitasking (modern OSs) time sharing is more evident because each running process takes only a fair quantum of the CPU time.
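The quantum-based round-robin idea can be sketched as follows (the task names, service times, and quantum are invented):

```python
from collections import deque

def round_robin(tasks, quantum):
    """tasks: list of (name, total_time). Returns the CPU time slices
    in the order they are granted."""
    queue = deque(tasks)
    slices = []
    while queue:
        name, remaining = queue.popleft()
        used = min(quantum, remaining)
        slices.append((name, used))
        if remaining > used:
            queue.append((name, remaining - used))  # back of the queue
    return slices

# No task monopolizes the CPU; each gets at most 2 units per turn:
print(round_robin([("A", 5), ("B", 2)], quantum=2))
# [('A', 2), ('B', 2), ('A', 2), ('A', 1)]
```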
Multithreading
Up to now, we have talked about multiprogramming as a way to allow multiple programs being resident in main memory and (apparently) running at the same time. Then, multitasking refers to multiple tasks running (apparently) simultaneously by sharing the CPU time. Finally, multiprocessing describes systems having multiple CPUs. So, where does multithreading come in?
Multithreading is an execution model that allows a single process to have multiple code segments (i.e., threads) run concurrently within the "context" of that process. You can think of threads as child processes that share the parent process resources but execute independently. Multiple threads of a single process can share the CPU in a single-CPU system or run truly in parallel in a multiprocessing system.
Why should we need to have multiple threads of execution within a single process context?
Well, consider for instance a GUI application where the user can issue a command that requires a long time to finish (e.g., a complex mathematical computation). Unless you design this command to run in a separate execution thread, you will not be able to interact with the main application GUI (e.g., to update a progress bar), because it becomes unresponsive while the calculation is taking place.
Of course, designing multithreaded/concurrent applications requires the programmer to handle situations that simply don't occur when developing single-threaded, sequential applications. For instance, when two or more threads try to access and modify a shared resource (race conditions), the programmer must make sure this does not leave the system in an inconsistent or deadlocked state. Typically, this thread synchronization is solved using OS primitives such as mutexes and semaphores.
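The race-condition problem and its mutex-based fix can be shown in a few lines of Python (the counter and the thread/iteration counts are invented): two threads increment a shared counter, and a Lock makes each update atomic so no increment is lost.

```python
import threading

counter = 0
lock = threading.Lock()

def add_many(n):
    global counter
    for _ in range(n):
        with lock:          # without the lock, concurrent updates can be lost
            counter += 1

threads = [threading.Thread(target=add_many, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 200000: every increment preserved
```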
Real-time operating system
A real-time operating system (RTOS; commonly pronounced as "are-toss") is a multitasking operating system designed for real-time applications. Such applications include embedded systems, industrial robots, scientific research equipment, and others. An RTOS simplifies the creation of real-time applications, but does not guarantee that the final result will be real-time; this also requires correct development of the software.
Real-time operating systems use specialized scheduling algorithms in order to support real-time applications. An RTOS can respond more quickly and/or more predictably to an event than other operating systems.
The main features of an RTOS are minimal interrupt latency and a minimal thread switching latency.
The two basic designs for an RTOS are:
- Event-driven (priority scheduling) designs: switch tasks only when an event of higher priority needs service, called pre-emptive priority.
- Time-sharing designs: switch tasks on a clock interrupt, and on events, called round robin.
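The event-driven (pre-emptive priority) design can be sketched minimally: among the ready tasks, the scheduler always dispatches the one with the highest priority. The task names and priority values below are invented.

```python
import heapq

def dispatch_order(ready_tasks):
    """ready_tasks: list of (priority, name); a lower number means a
    higher priority. Returns the names in the order they get the CPU."""
    heap = list(ready_tasks)
    heapq.heapify(heap)  # min-heap: highest-priority task is always on top
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]

# The motor-control task pre-empts (runs before) lower-priority work:
print(dispatch_order([(3, "logger"), (1, "motor"), (2, "sensor")]))
# ['motor', 'sensor', 'logger']
```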
Memory management
Memory management is the act of managing computer
memory. The essential requirement of memory management is to provide
ways to dynamically allocate portions of memory to programs at their request,
and free it for reuse when no longer needed. This is critical to any advanced
computer system where more than a single process might be underway at
any time.[1]
Modern general-purpose computer systems manage
memory at two levels: at the system level (see memory management
(operating systems)); and at the application level (as discussed in
this article). Application-level memory management is generally categorized as
either automatic memory management, usually involving garbage collection
(computer science), or manual memory management.
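Manual memory management can be illustrated with a toy first-fit allocator (the sizes and the first-fit policy are just one possible choice; real allocators also handle alignment and coalesce adjacent free blocks): memory is handed out on request and returned to a free list for reuse.

```python
class FirstFitHeap:
    def __init__(self, size):
        self.free = [(0, size)]  # list of (offset, length) free holes

    def alloc(self, size):
        for i, (off, length) in enumerate(self.free):
            if length >= size:
                # Carve the request out of the first hole that fits.
                if length == size:
                    del self.free[i]
                else:
                    self.free[i] = (off + size, length - size)
                return off
        return None  # out of memory

    def freemem(self, off, size):
        self.free.append((off, size))  # no coalescing in this sketch

heap = FirstFitHeap(100)
a = heap.alloc(30)   # offset 0
b = heap.alloc(50)   # offset 30
heap.freemem(a, 30)  # block a is now reusable
print(heap.alloc(25))  # reuses the freed region at offset 0
```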
Computer Memory
Memory is a major part of a computer and is divided into several types. It is the storage part of the computer where users save information and programs. Computer memory offers several kinds of storage media: some can store data temporarily and some can store it permanently. Memory holds the instructions and the data that are processed by the Central Processing Unit (CPU).
Types of Computer Memory:
Memory is an essential element of a computer; without it, the computer cannot perform even simple tasks. The performance of a computer is mainly based on its memory and CPU. A computer's internal memory is majorly categorized into two types: main (primary) memory and secondary memory.
1. Primary Memory / Volatile Memory.
2. Secondary Memory / Non Volatile Memory.
1. Primary Memory / Volatile Memory:
Primary memory is also called volatile memory because it cannot store data permanently. When the user wants to save data, the memory provides a location for it, but the data is not stored there permanently. Primary memory is also known as RAM.
Random Access Memory (RAM):
The primary storage is referred to as random access memory (RAM) because memory locations can be selected in any (random) order. It supports both read and write operations. If a power failure happens while the system is using memory, the data is lost permanently; this is why RAM is volatile memory. RAM is categorized into the following types:
- DRAM
- SRAM
- DRDRAM
2. Secondary Memory / Non Volatile Memory:
Secondary memory is external, non-volatile memory that stores data permanently on external storage media such as floppy disks, magnetic disks, and magnetic tapes. Secondary memory deals with the following types of components.
Read Only Memory (ROM):
ROM is a permanent memory location whose contents can only be read. No data loss happens whenever a power failure occurs while ROM is in use, because ROM is non-volatile.
ROM memory has several models; their names are as follows.
1. PROM: Programmable Read Only Memory (PROM) provides large storage but offers no erase feature. A PROM chip can be written once and read many times; the programs or instructions written into a PROM cannot be erased by other programs.
2. EPROM: Erasable Programmable Read Only Memory was designed to overcome the limitations of PROM and ROM. Users can erase the data of an EPROM by exposing the chip to ultraviolet light; the erased chip can then be reprogrammed.
3. EEPROM: Electrically Erasable Programmable Read Only Memory is similar to EPROM, but it uses electrical signals to erase the data, so the chip can be erased and reprogrammed without removing it from the computer.
Cache Memory: Main memory is slower than the CPU, so performance decreases when the CPU must wait on every memory access. This speed mismatch is reduced by maintaining a cache memory. Main memory can store a huge amount of data, but the cache memory is normally kept small and fast; it holds copies of the most frequently and recently used data so that the CPU can access them quickly.
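The role of a small cache in front of a large, slow main memory can be sketched with an LRU (least recently used) replacement policy; the capacity and access pattern below are invented, and real hardware caches work on fixed-size lines rather than single values.

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()  # address -> cached value, in LRU order

    def access(self, address, memory):
        """Returns (value, hit): hit is True when main memory was avoided."""
        if address in self.lines:            # cache hit: fast path
            self.lines.move_to_end(address)
            return self.lines[address], True
        value = memory[address]              # cache miss: go to main memory
        self.lines[address] = value
        if len(self.lines) > self.capacity:
            self.lines.popitem(last=False)   # evict the least recently used
        return value, False

memory = {i: i * 10 for i in range(10)}  # stand-in for slow main memory
cache = LRUCache(capacity=2)
print(cache.access(1, memory))  # (10, False): first access misses
print(cache.access(1, memory))  # (10, True): now served from the cache
```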
Storage units
Bit
The smallest unit of data in a computer is called a bit (binary digit). A bit has a single binary value, either
0 or 1. In most computer systems, there are eight bits in a byte. The value of
a bit is usually stored as either above or below a designated level of
electrical charge in a single capacitor within a memory device.
Nibble
Half a byte (four bits) is called a nibble.
Byte
In most computer systems, a byte is a unit of data that is eight
binary digits long. A byte is the unit most computers use to represent a
character such as a letter, number or typographic symbol (for example, “g”,
“5”, or “?”). A byte can also hold a string of bits that need to be used in
some larger unit of application purposes (for example, the stream of bits that
constitute a visual image for a program that displays images or the string of
bits that constitutes the machine code of a computer program).
In some computer systems, four bytes constitute a word, a unit that a computer
processor can be designed to handle efficiently as it reads and processes each
instruction. Some computer processors can handle two-byte or single-byte
instructions.
A byte is abbreviated with a "B" (a bit is abbreviated with a small "b"). Computer storage is usually
measured in byte multiples. For example, an 820 MB hard drive holds a nominal
820 million bytes – or megabytes – of data. Byte multiples are based on powers
of 2 and commonly expressed as a “rounded off” decimal number. For example, one
megabyte (“one million bytes”) is actually
1,048,576 (decimal) bytes.
Octet
In some systems, the term octet is used for an eight-bit unit instead of byte. In many systems, four eight-bit bytes or octets form a 32-bit word. In such systems, instruction lengths are
sometimes expressed as full-word (32 bits in length) or half-word (16 bits in length).
Kilobyte
A kilobyte (KB or Kbyte) is approximately a thousand bytes (actually 2 to the 10th power, or 1,024 bytes in decimal).
Megabyte
As a measure of computer processor storage and real and virtual memory, a megabyte (abbreviated MB) is 2 to the 20th power bytes, or 1,048,576 bytes in decimal notation.
Gigabyte
A gigabyte (pronounced gig-a-bite with hard G's) is a measure of computer data storage capacity and is roughly a billion bytes. A gigabyte is 2 to the 30th power bytes, or 1,073,741,824 bytes in decimal notation.
Terabyte
A terabyte is a measure of computer storage capacity and is 2 to the 40th power bytes, or 1,024 gigabytes.
Petabyte
A petabyte (PB) is a measure of memory or storage capacity and is 2 to the 50th power bytes or, in decimal, 1,024 terabytes (approximately a thousand terabytes).
Exabyte
An exabyte (EB) is a large unit of computer data storage, equal to 2 to the 60th power bytes. The prefix exa means one billion billion, or one quintillion, which is a decimal term. Two to the 60th power is actually 1,152,921,504,606,846,976 bytes in decimal, somewhat over a quintillion (ten to the 18th power) bytes. It is common to say that an exabyte is approximately one quintillion bytes. In decimal terms, an exabyte is a billion gigabytes.
Zettabyte
A zettabyte (ZB) is equal to one sextillion bytes, or 1,024 exabytes. At this time, no computer has one zettabyte of storage.
Yottabyte
A yottabyte (YB) is equal to one septillion bytes, or 1,024 zettabytes. At this time, no computer has one yottabyte of storage.
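The binary units above all follow one rule: each step up multiplies by 2 to the 10th power (1,024). A few lines of Python can reproduce the numbers quoted in this section:

```python
UNITS = ["B", "KB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB"]

def size_in_bytes(unit):
    # Each unit is 1,024 (2**10) times the previous one.
    return 2 ** (10 * UNITS.index(unit))

print(size_in_bytes("MB"))  # 1048576
print(size_in_bytes("EB"))  # 1152921504606846976
```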
Computer Networks
Computer Networks is an
international, archival journal providing a publication vehicle for complete
coverage of all topics of interest to those involved in the computer
communications networking area. The audience includes researchers, managers
and operators of networks as well as designers and implementors. The Editorial
Board will consider any material for publication that is of interest
to those groups.
SUBJECT COVERAGE
The topics covered by the journal include, but are not limited to, the following:
1. Communication Network Architectures:
New design contributions on Local Area Networks (LANs), Metropolitan Area Networks (MANs), Wide Area Networks (WANs) including Wired, Wireless, Mobile, Cellular, Sensor, Optical, IP, ATM, and other related network technologies, as well as new switching technologies and the integration of various networking paradigms.
2. Communication Network Protocols:
New design contributions on all protocol layers except the Physical Layer, considering all types of networks mentioned above and their performance evaluation; novel protocols, methods and algorithms related to, e.g., medium access control, error control, routing, resource discovery, multicasting, congestion and flow control, scheduling, multimedia quality of service, as well as protocol specification, testing and verification.
3. Network Services and Applications:
Web, Web caching, Web performance, Middleware and operating system support for all types of networking, electronic commerce, quality of service, new adaptive applications, and multimedia services.
4. Network Security and Privacy:
Security protocols, authentication, denial of service, anonymity, smartcards, intrusion detection, key management, viruses and other malicious codes, information flow, data integrity, mobile code and agent security.
5. Network Operation and Management:
Including network pricing, network system software, quality of service, signaling protocols, mobility management, power management and power control algorithms, network planning, network dimensioning, network reliability, network performance measurements, network modeling and analysis, and overall system management.
6. Discrete Algorithms and Discrete Modeling:
Algorithmic and discrete aspects in the context of computer networking as well as mobile and wireless computing and communications. Fostering cooperation among practitioners and theoreticians in this field.
Computer - Internet and Intranet
Internet
It is a worldwide system which has the following characteristics:
- The Internet is a worldwide/global system of interconnected computer networks.
- The Internet uses the standard Internet Protocol suite (TCP/IP).
- Every computer on the Internet is identified by a unique IP address.
- An IP address is a unique set of numbers (such as 110.22.33.114) which identifies a computer's location.
- A special computer, the DNS (Domain Name Server), is used to give names to IP addresses so that a user can locate a computer by name.
- For example, a DNS server will resolve the name http://stupoint.blogspot.in to a particular IP address to uniquely identify the computer on which this website is hosted.
- The Internet is accessible to every user all over the world.
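Python's standard ipaddress module, for example, can validate and inspect such an address; the sample address below is the one used in the list above.

```python
import ipaddress

addr = ipaddress.ip_address("110.22.33.114")
print(addr.version)     # 4: an IPv4 address
print(addr.is_private)  # False: a publicly routable address
print(int(addr))        # the same address as one 32-bit number
```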
Intranet
- An intranet is a system in which multiple PCs are connected to each other.
- PCs in an intranet are not available to the world outside the intranet.
- Usually each company or organization has its own intranet network, and the members/employees of that company can access the computers in their intranet.
- Each computer in an intranet is also identified by an IP address which is unique among the computers in that intranet.
Similarities in Internet and Intranet
- An intranet uses Internet protocols such as TCP/IP and FTP.
- Intranet sites are accessible via a web browser in a similar way to websites on the Internet, but only members of the intranet network can access intranet-hosted sites.
- In an intranet, an organization can run its own instant messengers, similar to Yahoo Messenger/Gtalk over the Internet.
Differences in Internet and Intranet
- The Internet is general, spanning PCs all over the world, whereas an intranet is specific to a few PCs.
- The Internet provides wider access to websites for a large population, whereas an intranet is restricted.
- The Internet is not as safe as an intranet, since an intranet can be safely privatized as per the need.
OSI MODEL
OSI (Open Systems Interconnection) is a reference model for how applications can communicate over a network. A
reference model is a conceptual framework for understanding relationships. The
purpose of the OSI reference model is to guide vendors and developers so the
digital communication products and software programs they create will interoperate,
and to facilitate clear comparisons among communications tools. Most vendors
involved in telecommunications make an attempt to describe their products and
services in relation to the OSI model. And although useful for guiding
discussion and evaluation, OSI is rarely actually implemented, as few network
products or standard tools keep all related functions together in well-defined
layers as related to the model. The TCP/IP protocols, which define the Internet,
do not map cleanly to the OSI model.
Developed by representatives of major computer and
telecommunication companies beginning in 1983, OSI was originally intended to
be a detailed specification of actual interfaces.
Instead, the committee decided to establish a common reference model for which
others could then develop detailed interfaces, which in turn could become standards.
OSI was officially adopted as an international standard by the International Organization for Standardization (ISO).
OSI layers
The main concept of OSI is that the process of
communication between two endpoints in a telecommunication network can be
divided into seven distinct groups of related functions, or layers. Each
communicating user or program is at a computer that can provide those seven
layers of function. So in a given message between users, there will be a flow
of data down through the layers in the source computer, across the network and
then up through the layers in the receiving computer. The seven layers of
function are provided by a combination of applications, operating
systems, network card device drivers and networking hardware that
enable a system to put a signal on a network cable or send it out over Wi-Fi or another wireless protocol.
The seven Open Systems Interconnection layers are:
Layer 7: The application layer. This is the layer at which communication
partners are identified (Is there someone to talk to?), network capacity is
assessed (Will the network let me talk to them right now?), and that creates a
thing to send or opens the thing received. (This layer is not the application
itself, it is the set of services an application should be able to make use of
directly, although some applications may perform application layer functions.)
Layer 6: The presentation layer. This layer is usually part of an
operating system (OS)
and converts incoming and outgoing data
from one presentation format
to another (for example, from clear text to encrypted text at one end and back
to clear text at the other).
Layer 5: The session layer. This layer sets up, coordinates and terminates
conversations. Services include authentication and reconnection after an
interruption. On the Internet, Transmission
Control Protocol (TCP) and User Datagram Protocol
(UDP) provide these services for most applications.
Layer 4: The transport layer. This layer manages packetization of data,
then the delivery of the packets, including checking for errors in the data
once it arrives. On the Internet, TCP and UDP provide these services for most
applications as well.
Layer 3: The network layer. This layer handles the addressing and routing
of the data (sending it in the right direction to the right destination on
outgoing transmissions and receiving incoming transmissions at the packet
level). IP is the network layer for the Internet.
Layer 2: The data-link layer. This layer sets up links across the physical
network, putting packets into network frames.
This layer has two sub-layers, the Logical
Link Control Layer and the Media
Access Control Layer. Ethernet is the main data link layer in use.
Layer 1: The physical layer. This layer conveys the bit
stream through the network at the electrical, optical or radio
level. It provides the hardware
means of sending and receiving data on a carrier
network.
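The flow of data "down through the layers" on the sender and "up through the layers" on the receiver can be sketched as nested headers wrapped around a payload (the bracketed header strings are invented placeholders, not real protocol formats):

```python
LAYERS = ["application", "presentation", "session",
          "transport", "network", "data-link", "physical"]

def encapsulate(payload):
    # Going down the stack: each layer wraps the data from the layer above.
    for layer in LAYERS:
        payload = f"[{layer}]{payload}"
    return payload

def decapsulate(frame):
    # Going up the stack: each layer strips its own header, outermost first.
    for layer in reversed(LAYERS):
        prefix = f"[{layer}]"
        assert frame.startswith(prefix), f"missing {layer} header"
        frame = frame[len(prefix):]
    return frame

frame = encapsulate("hello")
print(frame)               # the physical-layer header ends up outermost
print(decapsulate(frame))  # hello
```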
Internet Protocol
The Internet Protocol (IP) is the
method or protocol
by which data
is sent from one computer to another on the Internet.
Each computer (known as a host) on
the Internet has at least one IP
address that uniquely identifies it from all other computers on the
Internet.
When you send or receive data (for
example, an e-mail note or a Web page), the message gets divided into little
chunks called packets. Each of these packets contains both the sender's
Internet address and the receiver's address. Each packet is sent first to a gateway
computer that understands a small part of the Internet. The gateway computer
reads the destination address and forwards the packet to an adjacent gateway
that in turn reads the destination address and so forth across the Internet
until one gateway recognizes the packet as belonging to a computer within its
immediate neighborhood or domain. That
gateway then forwards the packet directly to the computer whose address is
specified.
Because a message is divided into
a number of packets, each packet can, if necessary, be sent by a different
route across the Internet. Packets can arrive in a different order than the
order they were sent in. The Internet Protocol just delivers them. It's up to
another protocol, the Transmission Control Protocol (TCP) to put
them back in the right order.
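That reordering step can be sketched in a few lines: sequence numbers let the receiver restore the original order no matter how the packets arrived (the sample packets are invented, and real TCP also handles retransmission and acknowledgements):

```python
def reassemble(packets):
    """packets: list of (sequence_number, data) in arrival order.
    Sorting by sequence number restores the original message."""
    return "".join(data for _, data in sorted(packets))

# Packets arrived out of order; the sequence numbers fix that:
arrived = [(2, "lo "), (1, "hel"), (3, "world")]
print(reassemble(arrived))  # hello world
```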
IP is a connectionless protocol, which means that
there is no continuing connection between the end points that are
communicating. Each packet that travels through the Internet is treated as an
independent unit of data without any relation to any other unit of data. (The
reason the packets do get put in the right order is because of TCP, the
connection-oriented protocol that keeps track of the packet sequence in a
message.) In the Open Systems Interconnection (OSI)
communication model, IP is in layer
3, the Network Layer.
The most widely used version of IP today is
Internet Protocol Version 4 (IPv4). However, IP Version 6 (IPv6)
is also beginning to be supported. IPv6 provides for much longer addresses and
therefore for the possibility of many more Internet users. IPv6 includes the
capabilities of IPv4 and any server that can support IPv6 packets can also
support IPv4 packets.