Questions related to Operating System
Q1. What are the basic functions of an operating system?
Basic functions of an operating system
Definition
An operating system is a group of computer programs that coordinates all the activities among computer hardware devices. It is the first program loaded into the computer by a boot program and remains in memory at all times.
Functions of an operating system
The basic functions of an operating system are:
Booting the computer
Performs basic computer tasks, e.g. managing the various peripheral devices such as the mouse and keyboard
Provides a user interface, e.g. command line, graphical user interface (GUI)
Handles system resources such as computer’s memory and sharing of the central processing unit (CPU) time by various applications or peripheral devices
Provides file management which refers to the way that the operating system manipulates, stores, retrieves and saves data.
Booting the computer
The process of starting or restarting the computer is known as booting. A cold boot is when you turn on a computer that has been turned off completely. A warm boot is the process of using the operating system to restart the computer.
Performs basic computer tasks
The operating system performs basic computer tasks, such as managing the various peripheral devices such as the mouse, keyboard and printers. For example, most operating systems now are plug and play which means a device such as a printer will automatically be detected and configured without any user intervention.
Provides a user interface
A user interacts with software through the user interface. The two main types of user interfaces are: command line and a graphical user interface (GUI).
With a command line interface, the user interacts with the operating system by typing commands to perform specific tasks. An example of a command line interface is DOS (disk operating system).
With a graphical user interface, the user interacts with the operating system by using a mouse to access windows, icons, and menus. An example of a graphical user interface is Windows Vista or Windows 7.
The operating system is responsible for providing a consistent application program interface (API) which is important as it allows a software developer to write an application on one computer and know that it will run on another computer of the same type even if the amount of memory or amount of storage is different on the two machines.
Handles system resources
The operating system also handles system resources such as the computer’s memory and sharing of the central processing unit (CPU) time by various applications or peripheral devices. Programs and input methods are constantly competing for the attention of the CPU and demand memory, storage and input/output bandwidth. The operating system ensures that each application gets the necessary resources it needs in order to maximise the functionality of the overall system.
Provides file management
The operating system also handles the organisation and tracking of files and directories (folders) saved to or retrieved from a computer disk. The file management system allows the user to perform such tasks as creating files and directories, renaming files, copying and moving files, and deleting files. The operating system keeps track of where files are located on the hard drive through the type of file system. The two main types of file system are File Allocation Table (FAT) and New Technology File System (NTFS).
Types of file system
File Allocation Table (FAT)
New Technology File System (NTFS)
The File Allocation Table (FAT) file system uses a file allocation table, which records which clusters are used or unused and where files are located within the clusters.
NTFS is a file system introduced by Microsoft and it has a number of advantages over the previous file system, named FAT32 (File Allocation Table).
One major advantage of NTFS is that it includes features to improve reliability. For example, the new technology file system includes fault tolerance, which automatically repairs hard drive errors without displaying error messages. It also keeps detailed transaction logs, which track hard drive errors. This can help prevent hard disk failures and makes it possible to recover files if the hard drive does fail.
NTFS also allows permissions (such as read, write, and execute) to be set for individual directories and files.
———————————–
Q2. Explain briefly about, processor, assembler, compiler, loader, linker and the functions executed by them.
Answers:
Processor: — A processor is the part of a computer system that executes instructions. It is also called a CPU.
Assembler: — An assembler is a program that takes basic computer instructions and converts them into a pattern of bits that the computer’s processor can use to perform its basic operations. Some people call these instructions assembler language and others use the term assembly language.
Compiler: — A compiler is a special program that processes statements written in a particular programming language and turns them into machine language or “code” that a computer’s processor uses. Typically, a programmer writes language statements in a language such as Pascal or C one line at a time using an editor. The file that is created contains what are called the source statements. The programmer then runs the appropriate language compiler, specifying the name of the file that contains the source statements.
Loader: — In a computer operating system, a loader is a component that locates a given program (which can be an application or, in some cases, part of the operating system itself) in offline storage (such as a hard disk), loads it into main storage (in a personal computer, called random access memory), and gives that program control of the computer.
Linker: — Linker performs the linking of libraries with the object code to make the object code into an executable machine code.
What are the different phases of software development? Explain briefly.
Answer:
The software development process is composed of many activities, notably the following:
* Conception: establishing a case model for the development of the software.
* Requirements analysis: extracting the requirements of a desired software product is the first task in creating it. While customers probably believe they know what the software is to do, it is important to recognize incomplete, ambiguous or contradictory requirements.
* Specification: the task of precisely describing the software to be written: what functions it should have, the user interface, and any other requirements.
* Software architecture: the architecture of a software system refers to an abstract representation of that system. Architecture involves making sure the software system will meet the requirements of the product, as well as ensuring that future requirements can be addressed. The architecture step also addresses interfaces between the software system and other software products, as well as the underlying hardware and the operating system platform.
* Implementation: converting a design to functional code.
* Testing: testing all aspects of the software’s functionality.
* Documentation: documenting the internal design of the software for the purpose of future maintenance and enhancement.
* Software training and support: users will have many questions and software problems, which leads to the next phase.
* Maintenance: maintaining and enhancing the software to cope with newly discovered problems or new requirements.
—————————————————-
Differentiate between RAM and ROM?
Answer
Random Access Memory is a form of memory that can be read without having to begin at the first address, then the second address, then the third, and so on. This is a carryover from when most memory was Serial Access, such as magnetic tape, paper tape, or punched cards. The on-board memory computers use for temporary storage is Random Access Memory, but on chips, instead of a hard disc drive.
Read Only Memory is exactly what the name implies, it can only be read, not written to. A CD-ROM is a form of memory, as is a chip on the motherboard which is used to store instructions for the Central Processor Unit. RAM is your computer’s temporary storage space. RAM is really the computer’s short-term memory. As such, it remembers everything that the computer needs to process the data into information, such as data that has been entered and software instructions, but only when the computer is on. This means that RAM is an example of volatile storage. When the power is off, the data stored in RAM is cleared out. This is why, in addition to RAM, systems always include nonvolatile storage devices for permanent storage of instructions and data when the computer is powered off. Hard disks provide the greatest nonvolatile storage capacity in the computer system.
——————————-
What is DRAM? In which form does it store data?
DRAM, or Dynamic Random Access Memory, stores data in the form of 0s and 1s. The processor can access any part of the memory randomly. It is called dynamic because its contents must be refreshed every few milliseconds, unlike static RAM (SRAM), which retains its data without refreshing for as long as power is supplied.
———————————–
What is cache memory?
Cache memory is random access memory (RAM) that a computer microprocessor can access more quickly than it can access regular RAM. As the microprocessor processes data, it looks first in the cache memory and if it finds the data there (from a previous reading of data), it does not have to do the more time-consuming reading of data from larger memory.
Cache memory is sometimes described in levels of closeness and accessibility to the microprocessor. An L1 cache is on the same chip as the microprocessor. (For example, the PowerPC 601 processor has a 32 kilobyte level-1 cache built into its chip.) L2 is usually a separate static RAM (SRAM) chip. The main RAM is usually a dynamic RAM (DRAM) chip.
In addition to cache memory, one can think of RAM itself as a cache of memory for hard disk storage since all of RAM’s contents come from the hard disk initially when you turn your computer on and load the operating system (you are loading it into RAM) and later as you start new applications and access new data. RAM can also contain a special area called a disk cache that contains the data most recently read in from the hard disk.
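The hit/miss flow described above can be modelled in a short sketch. This is purely illustrative: the class name, the two-line capacity, and the least-recently-used eviction policy are all invented for the example (real caches are implemented in hardware with lines, tags and sets).

```python
from collections import OrderedDict

class SimpleCache:
    """Toy cache: a hit is served from the fast store; a miss is
    fetched from the slow backing store and cached for next time."""
    def __init__(self, capacity, backing_store):
        self.capacity = capacity
        self.backing = backing_store      # stands in for main memory
        self.lines = OrderedDict()        # stands in for the cache itself
        self.hits = self.misses = 0

    def read(self, addr):
        if addr in self.lines:            # cache hit: fast path
            self.hits += 1
            self.lines.move_to_end(addr)  # mark as most recently used
            return self.lines[addr]
        self.misses += 1                  # cache miss: slow path
        value = self.backing[addr]
        if len(self.lines) >= self.capacity:
            self.lines.popitem(last=False)  # evict least recently used line
        self.lines[addr] = value
        return value

memory = {a: a * 10 for a in range(8)}
cache = SimpleCache(capacity=2, backing_store=memory)
for addr in [1, 2, 1, 3, 1]:
    cache.read(addr)
print(cache.hits, cache.misses)  # repeated reads of address 1 mostly hit
```

Because address 1 keeps being re-read, it stays resident and two of the five reads avoid the slow path, which is exactly why caches pay off for programs with locality of reference.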
———————————————–
Differentiate between Compiler and Interpreter?
Answer:
A compiler translates code from a source language (e.g. C, C++, Java) to a target language, which can then be executed by a (virtual or physical) machine.
An interpreter reads code in an interpreted language (e.g. PHP, Perl, JavaScript) and directly executes the contained instructions.
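The distinction can be sketched for a one-operator toy language. This is not a real compiler: the "target language" here is a Python closure rather than machine code, and the tokenized form `"x + 3"` is invented for the example. The point is that the interpreter re-walks the source on every run, while the compiled form is translated once and then executed many times.

```python
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def interpret(tokens, env):
    # walks the token list directly each time it is called
    left, op, right = tokens
    val = lambda t: env[t] if t in env else int(t)
    return OPS[op](val(left), val(right))

def compile_expr(tokens):
    # translates the tokens once into a reusable function
    left, op, right = tokens
    fn = OPS[op]
    def compiled(env):
        val = lambda t: env[t] if t in env else int(t)
        return fn(val(left), val(right))
    return compiled

tokens = "x + 3".split()
print(interpret(tokens, {"x": 4}))    # 7: source is re-examined per call
run = compile_expr(tokens)            # translate once...
print(run({"x": 4}), run({"x": 10}))  # ...execute many times: 7 13
```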
———————————
What are the different tasks of Lexical analysis?
Answers:
The purpose of the lexical analyzer is to partition the input text, delivering a sequence of comments and basic symbols. Comments are character sequences to be ignored, while basic symbols are character sequences that correspond to terminal symbols of the grammar defining the phrase structure of the input
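The partitioning task described above can be sketched with Python's `re` module; the token classes below belong to an invented toy grammar, not any particular language. Comments and whitespace are recognised but discarded, while the remaining basic symbols are delivered as a sequence.

```python
import re

# Token classes for a toy grammar: comments and whitespace are
# recognised so they can be skipped; the rest become basic symbols.
TOKEN_SPEC = [
    ("COMMENT", r"#[^\n]*"),
    ("NUMBER",  r"\d+"),
    ("IDENT",   r"[A-Za-z_]\w*"),
    ("OP",      r"[+\-*/=]"),
    ("SKIP",    r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{n}>{p})" for n, p in TOKEN_SPEC))

def tokenize(text):
    for m in MASTER.finditer(text):
        kind = m.lastgroup
        if kind not in ("SKIP", "COMMENT"):   # ignore, as the text describes
            yield (kind, m.group())

print(list(tokenize("total = count + 42  # running sum")))
```

Running this yields the pairs `(IDENT, total)`, `(OP, =)`, `(IDENT, count)`, `(OP, +)`, `(NUMBER, 42)`; the comment is dropped, exactly the partitioning the answer describes.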
———————————–
What are the different functions of Syntax phase, Scheduler?
The syntax phase (parser) checks that the sequence of tokens produced by the lexical analyzer forms valid statements according to the grammar of the language. The scheduler deals with the problem of deciding which of the processes in the ready queue is to be allocated CPU time for processing.
——————————–
What are the main difference between Micro-Controller and Micro- Processor?
microprocessor:
1 general purpose digital computer
2 many operational codes for moving data from external
memory to cpu
3 rapid movement of code and data from externel address to
chip
microcontroller
1 specific purpose digital computer
2 one or two codes for moving data from external memory to
cpu
3 rapid movements of bits within the chip
———————————-
Describe different job scheduling in operating systems.
Answers:
Scheduling is the activity of deciding when processes will receive the resources they request.
FCFS: — FCFS stands for First Come First Served. In FCFS the job that has been waiting the longest is served next.
Round Robin Scheduling: — Round Robin scheduling is a scheduling method where each process gets a small quantum of time to run and is then preempted so the next process gets to run. This is called time-sharing and gives the effect of all the processes running at the same time.
Shortest Job First: — The Shortest Job First scheduling algorithm is a nonpreemptive scheduling algorithm that chooses the job that will execute for the shortest amount of time.
Priority Scheduling: —Priority scheduling is a scheduling method where at all times the highest priority process is assigned the resource.
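The waiting-time arithmetic behind these policies can be sketched directly. The burst lengths below are invented for illustration (the classic textbook mix of one long and two short jobs, all arriving at time 0), and only FCFS and non-preemptive SJF are modelled.

```python
def fcfs_waiting_times(bursts):
    """FCFS: each job waits for the sum of the bursts queued before it."""
    waits, elapsed = [], 0
    for b in bursts:
        waits.append(elapsed)
        elapsed += b
    return waits

def sjf_order(bursts):
    """Non-preemptive SJF with all jobs present at time 0: shortest first."""
    return sorted(range(len(bursts)), key=lambda i: bursts[i])

bursts = [24, 3, 3]                    # invented CPU burst lengths
print(fcfs_waiting_times(bursts))      # [0, 24, 27]: average wait 17
order = sjf_order(bursts)              # SJF runs jobs 1, 2, then 0
print(fcfs_waiting_times([bursts[i] for i in order]))  # [0, 3, 6]: average 3
```

Reordering the same jobs shortest-first cuts the average wait from 17 to 3, which is why SJF is provably optimal for average waiting time when all burst lengths are known in advance.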
——————————
What is a Real-Time System ?
Definition 1:
A real-time system is one that must process information and produce a response within a specified time, else risk severe consequences, including failure. That is, in a system with a real-time constraint it is no good to have the correct action or the correct answer after a certain deadline: it is either by the deadline or it is useless!
Definition 2 (modified from The Oxford Dictionary of Computing):
Any system in which the time at which output is produced is significant. This is usually because the input corresponds to some event in the physical world, and the output has to relate to that same event. The lag from input time to output time must be sufficiently small for acceptable timeliness
Definition 3 (Young 1982):
Any information processing activity or system which has to respond to externally generated input stimuli within a finite and specified period.
Examples:
ABS, aircraft control, ticket reservation system at airport, over-temperature monitor in nuclear power station, mobile phone, oven temperature controller, Doppler blood-flow monitor, ECG/arrhythmia monitor.
Failure is inability to perform according to specification. In the case of real-time systems the ‘failed’ specification may be lack of correctness or the failure to produce the response by the required time.
A real-time system is one whose correctness is based on both the correctness of the outputs and their timeliness. The ‘novelty’ here is that the system is time critical.
NB: It does not have to be ‘fast’: the deadline may be days or weeks… For instance, Texas Instruments European Programme Information Centre (epic) responds within 48 hours for literature requests.
————————————
What is the difference between Hard and Soft real-time systems ?
A hard real-time system guarantees that critical tasks complete on time. This goal requires that all delays in the system be bounded, from the retrieval of stored data to the time it takes the operating system to finish any request made of it. A soft real-time system is one where a critical real-time task gets priority over other tasks and retains that priority until it completes. As in hard real-time systems, kernel delays need to be bounded.
—————————
What is a mission critical system ?
A mission-critical system is one that must accomplish certain critical tasks within a predefined duration of time and/or against a predefined set of benchmarks. The time constraints and the benchmarks are an integral part of the mission, and they often carry the life/death or make/break operational consequences that earn such systems the label. Without meeting those constraints, the consequence of failure often thoroughly defeats the purpose of the entire system.
—————————-
If two processes share the same system memory and system clock in a distributed system, what is it called?
Symmetric Multiprocessing (SMP)
—————————-
What is the state of the processor, when a process is waiting for some event to occur?
Waiting state
or
Ready or waiting
A “ready” or “waiting” process has been loaded into main memory and is awaiting execution on a CPU (to be context switched onto the CPU by the dispatcher, or short-term scheduler).
There may be many “ready” processes at any one point of the system’s execution—for example, in a one-processor system, only one process can be executing at any one time, and all other “concurrently executing” processes will be waiting for execution.
A ready queue or run queue is used in computer scheduling. Modern computers are capable of running many different programs or processes at the same time. However, the CPU is only capable of handling one process at a time. Processes that are ready for the CPU are kept in a queue for “ready” processes. Other processes that are waiting for an event to occur, such as loading information from a hard drive or waiting on an internet connection, are not in the ready queue.
Running
A process moves into the running state when it is chosen for execution. The process’s instructions are executed by one of the CPUs (or cores) of the system. There is at most one running process per CPU or core.
Blocked
A process is blocked when it is waiting on some event (such as the completion of an I/O operation or the arrival of a signal).
A process may be blocked for various reasons, such as when it has exhausted the CPU (Central Processing Unit) time allocated to it or when it is waiting for an event to occur.
—————————–
What do you mean by deadlock?
Answers:
Deadlock is a situation where a group of processes are all blocked and none of them can become unblocked until one of the others becomes unblocked.
The simplest deadlock is two processes each of which is waiting for a message from the other.
or
When two or more processes wait for a resource to be freed which is controlled by another process, a deadlock situation occurs. It usually occurs in a multiprocessing environment. Example, one client application needs access to a table to acquire a lock which is held by another client application that in turn may attempt to obtain the lock that is held by the first application.
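The circular-wait pattern in the example above can be sketched as cycle detection in a wait-for graph. The graph shape and process names are invented for illustration, and the sketch assumes each process waits on at most one other.

```python
def has_deadlock(wait_for):
    """Detect a cycle in a wait-for graph (process -> process it waits on).
    A cycle means every process in it is blocked on another: deadlock."""
    def on_cycle(start):
        seen, p = set(), start
        while p in wait_for:          # follow the chain of waiters
            if p in seen:
                return True           # came back to a visited process
            seen.add(p)
            p = wait_for[p]
        return False                  # chain ends at a runnable process
    return any(on_cycle(p) for p in wait_for)

# A waits for B's lock while B waits for A's lock: circular wait.
print(has_deadlock({"A": "B", "B": "A"}))   # True
# A waits for B, B waits for C, and C runs freely: no cycle.
print(has_deadlock({"A": "B", "B": "C"}))   # False
```

This is essentially what a database lock manager does when it decides which client (as in the table-lock example above) to abort.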
————————————
Explain the difference between microkernel and macro kernel.
Answer:
Micro-Kernel: A micro-kernel is a minimal operating system that performs only the essential functions of an operating system. All other operating system functions are performed by system processes.
Monolithic: A monolithic operating system is one where all operating system code is in a single executable image and all operating system code runs in system mode
———–
Give an example of microkernel.
Amoeba, WinNT, Minix
——————
When would you choose bottom up methodology?
The top-down approach requires you to write your own WSDL and XSDs and then create a service that follows that WSDL and those XSDs. In the bottom-up approach you first write the service code and then produce the WSDL and XSDs. In the Microsoft world, bottom-up is very easy because the WSDL and XSDs are generated for you by tools. The top-down approach is much harder because it has less tool support and because developers are usually not skilled enough to write WSDL by hand (especially with WS-Policies).
————————-
When would you choose top down methodology?
Top-down methodology is one where the overall picture of the system is defined first. Then we go into the details of the sub-systems at the next level, and so on. In brief, the complex system is first seen as a whole and is then broken down into smaller, simpler pieces.
The top-down approach is preferred when there is a good understanding of the complete system; sufficient knowledge of the system is required.
—————-
Why paging is used ?
Answer:
Paging is a solution to the external fragmentation problem: it permits the logical address space of a process to be noncontiguous, allowing the process to be allocated physical memory wherever it is available.
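The address translation that makes this noncontiguity invisible to the process can be sketched as follows. The 4 KB page size and the particular frame numbers are assumed for illustration, and the missing-page (page fault) case is deliberately ignored.

```python
PAGE_SIZE = 4096  # 4 KB pages, assumed for the example

# Page table: logical page number -> physical frame number.
# Frames 7 and 2 need not be adjacent: that is the point of paging.
page_table = {0: 7, 1: 2}

def translate(logical_addr):
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]          # a real MMU would fault if missing
    return frame * PAGE_SIZE + offset

print(translate(10))      # page 0 -> frame 7: 7*4096 + 10 = 28682
print(translate(5000))    # page 1 -> frame 2: 2*4096 + 904 = 9096
```

Logical addresses 10 and 5000 are contiguous-ish from the process's point of view, yet they land in physical frames far apart, which is exactly how paging sidesteps external fragmentation.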
—————————
Which is the best page replacement algorithm, and why? How much time is usually spent in each phase, and why?
1. It should be good in predicting which page wouldn’t be needed for the maximum amount of time, so that it can be selected as the victim and swapped onto the disk.
2. It should use minimal resources (in terms of computation and memory) to identify the victim.
There are a number of algorithms suggested, including Random, First-In First-Out (FIFO), Least Recently Used (LRU) and the second-chance (clock) algorithm. The second-chance algorithm looks to be the best practical choice to me, as it approximates LRU while using minimal resources to identify the victim.
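The two criteria above can be checked empirically by counting page faults. The sketch below compares FIFO and LRU on an invented reference string with three frames; the numbers are illustrative only, and on other reference strings FIFO can even beat LRU (or get worse with more frames, Belady's anomaly).

```python
from collections import deque

def count_faults_fifo(refs, frames):
    mem, queue, faults = set(), deque(), 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.discard(queue.popleft())   # evict oldest resident page
            mem.add(p)
            queue.append(p)
    return faults

def count_faults_lru(refs, frames):
    mem, faults = [], 0                        # list order = recency
    for p in refs:
        if p in mem:
            mem.remove(p)                      # refresh recency on a hit
        else:
            faults += 1
            if len(mem) == frames:
                mem.pop(0)                     # evict least recently used
        mem.append(p)
    return faults

refs = [1, 2, 3, 1, 4, 1, 2, 5]                # invented reference string
print(count_faults_fifo(refs, 3), count_faults_lru(refs, 3))  # 7 6
```

Here LRU faults one time fewer because it keeps the frequently re-referenced page 1 resident, matching criterion 1 (predicting which page won't be needed) at the cost of more bookkeeping, criterion 2.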
——————————
Difference between Primary storage and secondary storage?
1. All secondary storage is permanent.
2. Most primary storage is temporary.
3. Secondary storage is usually cheaper and larger.
4. Primary storage is expensive and smaller.
5. Primary storage is smaller in capacity (an L2 cache may be only 2 MB).
6. Secondary storage is bigger in capacity.
7. Primary storage is usually faster and therefore more expensive.
8. Secondary storage connects to the CPU via cables and is therefore slower.
9. Primary storage is closer to the CPU and/or is integrated onto it and is therefore faster.
10. Examples of primary storage: RAM, L2 cache. Secondary storage: hard drive, CD-ROM.
or
Primary memory is the main memory (RAM), where the running operating system and programs reside.
Secondary memory can be external devices like CDs, floppy disks and magnetic discs. Secondary storage cannot be directly accessed by the CPU and is also external memory storage.
– Primary memory storage is temporary, whereas secondary storage is permanent.
– Primary memory is expensive and smaller, whereas secondary memory is cheaper and larger.
– Primary memory storage is faster, whereas secondary storage is slower.
– Primary memory storage is connected to the CPU through data buses, whereas secondary storage is connected to the CPU through data cables.
—————————————–
What is multi-tasking, multi-programming, multi threading?
Multiprogramming is a rudimentary form of parallel processing in which several programs are run at the same time on a uniprocessor. Since there is only one processor, there can be no true simultaneous execution of different programs. Instead, the operating system executes part of one program, then part of another, and so on. To the user it appears that all programs are executing at the same time.
Multitasking, in an operating system, is allowing a user to perform more than one computer task (such as the operation of an application program) at a time. The operating system is able to keep track of where you are in these tasks and go from one to the other without losing information
Multithreading is the ability of a program or an operating system process to manage its use by more than one user at a time and to even manage multiple requests by the same user without having to have multiple copies of the program running in the computer
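A minimal multithreading sketch: several threads service "requests" against one shared copy of the program's state, which is exactly why the shared data must be protected. The worker function and counts are invented for the example.

```python
import threading

counter = 0                  # shared state: one copy serves all threads
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:           # serialise access to the shared counter;
            counter += 1     # without it, concurrent += loses updates

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                 # wait for all four threads to finish
print(counter)               # 40000: the lock prevents lost updates
```

The four workers behave like four simultaneous users of one program instance; remove the lock and the final count can come up short, the classic race condition.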
——————————————
Explain about paged segmentation and segment paging
PAGING: Paging memory allocation algorithms divide computer memory into small partitions and allocate memory using a page as the smallest building block.
SEGMENTATION: Segmentation means that a part or parts of the memory will be sealed off from the currently running process through the use of hardware registers. If the data that is about to be read or written is outside the permitted address space of that process, a segmentation fault will result.
———————-
While running DOS on a PC, which command would be used to duplicate the entire diskette?
Diskcopy
The command will be C:\>diskcopy A: A:
Put the source disk in the drive. Then give the command. After a while you will be asked to insert the target disk. Then bring out the source disk and insert the target disk and press Enter.
——————————————————
Questions related to NETWORKING
1. User(s) are complaining of delays when using the network. What would you do?
There are many issues that can cause network delays. The administrator / network engineer should check the following simple tasks before proceeding:
· get the user(s) to demonstrate the problem.
· determine how many other users are affected.
· ensure desktop hardware and configuration is OK.
· trace all connections (they may be on another subnet).
· commence some monitoring or diagnostics.
· determine problem (if there is one).
· check for virus / spyware activity.
· if everything seems OK, try restarting the router / switch, and at the same time try restarting the system.
———————————
2. What are some of the problems associated with operating a switched LAN?
There are not really any problems... it does exactly what it says on the tin.
It switches packets between ports on the switch. That's all it does. It does it well, and without problems.
I fail to see what they are trying to get at with this question.
As noted above, a switch has fewer features than a router, the main one being that it doesn't ROUTE! But that's exactly what it's meant to do, so this isn't a problem.
A router is only used when connecting two or more LANs together. This is why you have one to connect to the Internet (a separate network).
—————————–
4. What sort of cabling is suitable for Fast Ethernet protocols?
For Ethernet:
Specification Cable Type Maximum length
10BaseT Unshielded Twisted Pair 100 meters
10Base2 Thin Coaxial 185 meters
10Base5 Thick Coaxial 500 meters
10BaseF Fiber Optic 2000 meters
Fast Ethernet: 100Mbps Ethernet (IEEE 802.3u)
Fast Ethernet (100BASE-T) offers a speed increase ten times that of the 10BaseT Ethernet specification, while preserving such qualities as frame format, MAC mechanisms, and MTU. Such similarities allow the use of existing 10BaseT applications and network management tools on Fast Ethernet networks. Officially, the 100BASE-T standard is IEEE 802.3u.
Like Ethernet, 100BASE-T is based on the CSMA/CD LAN access method. There are several different cabling schemes that can be used with 100BASE-T, including:
100BASE-TX: two pairs of high-quality twisted-pair wires
100BASE-T4: four pairs of normal-quality twisted-pair wires
100BASE-FX: fiber optic cables
——————————————
5. What is a Class D IP address?
Class D 224.0.0.0 to 239.255.255.255
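A small sketch of the historical classful lookup: the class of an IPv4 address is determined entirely by its first octet, with Class D (224-239) reserved for multicast. The function name is invented for the example, and real networks today use CIDR rather than classes.

```python
def ip_class(addr):
    """Classify a dotted-quad IPv4 address by its first octet
    (historical classful addressing rules)."""
    first = int(addr.split(".")[0])
    if first < 128:
        return "A"      # 0-127
    if first < 192:
        return "B"      # 128-191
    if first < 224:
        return "C"      # 192-223
    if first < 240:
        return "D"      # 224-239: multicast
    return "E"          # 240-255: reserved/experimental

print(ip_class("224.0.0.1"))      # D: the all-hosts multicast group
print(ip_class("192.168.1.1"))    # C
```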
—————————-
7. What is Firewall?
A firewall is like locking the front door to your house—it helps keep intruders (in this case, hackers and malicious software) from getting in. Windows Firewall is on by default in Windows 7, so you don’t need to set it up—plus we’ve made it more flexible and easier to use.
Now you can fine-tune the protection and notifications you want for each of your network profiles—Home, Work, and Public. When you’re connected to a public network like a library or a coffee shop, you may want to block all incoming connections. At home or work, this might be overkill. Whatever level of protection you choose for your profiles, you’ll be able to switch between them with ease.
————————–
11. What is the role of TCP protocol and IP protocol.
TCP and IP are two of the most important communication protocols in the suite called the Internet protocol suite. TCP and IP are software components that form part of the operating system’s kernel.
TCP is responsible for:
• Establishing a connection before the applications that use TCP start to exchange data (it is a connection-oriented protocol).
• Disassembling the actual data (payload) into packets of a fixed number of bytes at the source computer or node.
• Reassembling received packets back into the actual data (its original form).
• Sending/receiving packets in order (FIFO), with no missing packets and no duplicate packets.
• Resending a packet if it is missed (data can be lost in the intermediate network).
• Discarding a packet if it is duplicated.
• Handling errors.
In brief, TCP is a connection-oriented protocol (setting up and maintaining a reliable connection) responsible for the correct delivery of data.
IP is responsible for moving (routing) packets of data from node to node using a four-byte destination address.
—————————–
12. What is UDP?
The User Datagram Protocol (UDP) is one of the core members of the Internet protocol suite, the set of network protocols used for the Internet. With UDP, computer applications can send messages, in this case referred to as datagrams, to other hosts on an Internet Protocol (IP) network without requiring prior communications to set up special transmission channels or data paths. The protocol was designed by David P. Reed in 1980 and formally defined in RFC 768.
UDP uses a simple transmission model with a minimum of protocol mechanism. It has no handshaking dialogues, and thus exposes any unreliability of the underlying network protocol to the user’s program. As this is normally IP over unreliable media, there is no guarantee of delivery, ordering or duplicate protection. UDP provides checksums for data integrity, and port numbers for addressing different functions at the source and destination of the datagram.
UDP is suitable for purposes where error checking and correction are either not necessary or performed in the application, avoiding the overhead of such processing at the network interface level. Time-sensitive applications often use UDP because dropping packets is preferable to waiting for delayed packets, which may not be an option in a real-time system. If error correction facilities are needed at the network interface level, an application may use the Transmission Control Protocol (TCP) or the Stream Control Transmission Protocol (SCTP), which are designed for this purpose.
A number of UDP’s attributes make it especially suited for certain applications.
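The connectionless, "fire and forget" style described above can be sketched with Python's standard socket module. This is a minimal illustration, not a production pattern: both endpoints live in one process on localhost, and the port is chosen automatically.

```python
import socket

# A "server" socket bound to an ephemeral local port; UDP needs no
# connection setup before data can be sent.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
server.settimeout(2.0)
port = server.getsockname()[1]

# A "client" socket simply fires a datagram at the address; delivery,
# ordering and duplicate protection are NOT guaranteed by UDP itself.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"hello", ("127.0.0.1", port))

data, addr = server.recvfrom(1024)   # on localhost this normally arrives
print(data)                          # b'hello'
client.close()
server.close()
```

Note there is no accept/connect step and no acknowledgement: if the datagram were dropped, neither side would be told.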
————————————–
14. How should I choose a port number for my server?
Typically, higher port numbers are not reserved, so you are better off using those. If you are developing software that will be used by many people, you can officially register your port (with IANA); then, if another software vendor uses that port and conflicts with your software, you can tell the user that your port is officially registered and that the other developer should not be using it.
Low port numbers such as 22, 23 and 660 are generally reserved for well-known services.
——————————–
15. Layers in TCP/IP
The TCP/IP model comprises four layers: the Application layer, the Host-to-Host layer or Transport layer, the Internet layer, and the Network Access layer. These layers and their corresponding functions and protocols are summarized in Table 15-3.
As with the OSI model, encapsulation occurs as data traverses the layers from the Application layer to the Network Access layer at the transmitting node. This process is reversed in the receiving node. Encapsulation in TCP/IP is illustrated in Figure 15-1.
———————————-
16. How can I be sure that a UDP message is received?
If you use UDP, the application layer has to make sure the message was successfully received.
TCP Versus UDP
Since their inception, the TCP and UDP protocols have taken very different evolutionary paths. TCP is based on connections, maintains a session, and guarantees a degree of reliability and standardized flow control. UDP provides no such features and relies upon the application layer to provide such services.
UDP allows you to send a packet, with or without a checksum, and redirects the packet (multiplexes it) to a listening application based on the port number the packet was sent to. A single packet can be sent to multiple machines at once by using multicast and broadcast transmission.
With UDP, no connection is maintained—each packet sent is treated independently. UDP makes no attempt to control transmission speeds based on congestion. If a packet is lost, the application must detect and remedy the situation. If a packet arrives out of order, you’re on your own again.
UDP also does not provide the security features that you can expect to find with TCP. This includes the three-way handshake that TCP uses to ensure the validity of the claimed source address.
So what can UDP do for your application that TCP cannot? To start, UDP supports multicast—sending a single packet to multiple machines. Multicast can be used to send a message even when you do not know the IP address or DNS name of the target machines. This is used, for example, to discover the servers within a local network. Using multicast can also save bandwidth. If you want to play a presentation to 20 local machines, by using multicast the server would only need to send each packet once.
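Since UDP leaves reliability to the application, a common application-level pattern is stop-and-wait: the sender retransmits the datagram until the receiver acknowledges it. The sketch below is purely illustrative; both sides run in one process on localhost, and a real protocol would add sequence numbers to detect duplicates.

```python
import socket

# Stop-and-wait sketch over UDP: retransmit until an ACK arrives.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
receiver.settimeout(2.0)
dest = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.settimeout(2.0)

ack = None
for attempt in range(3):                     # bounded number of retries
    sender.sendto(b"payload", dest)          # (re)transmit the datagram
    try:
        msg, addr = receiver.recvfrom(1024)  # receiver gets the data...
        receiver.sendto(b"ACK", addr)        # ...and acknowledges it
        ack, _ = sender.recvfrom(1024)
        if ack == b"ACK":
            break                            # delivered and acknowledged
    except socket.timeout:
        continue                             # data or ACK lost: resend

print(ack)   # b'ACK'
sender.close()
receiver.close()
```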
——————————-
19. How many bytes in an IPX network address?
10 bytes – 4 for the network, 6 for the MAC address.
————————–
27. Can I connect two computers to the internet using the same line?
Yes. Connecting two computers directly is called peer-to-peer networking. This networking allows sharing of all resources, including the internet connection.
———————–
28 Q: What is TCP? What’s the difference between TCP and UDP?
A. Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) are transport protocols and two of the core protocols of the Internet protocol suite. Both TCP and UDP work at the transport layer of the TCP/IP model, but they have very different usage.
Difference between TCP and UDP
Reliability: TCP is a connection-oriented protocol. When a file or message is sent, it will be delivered unless the connection fails; if part of it is lost, the receiver requests the lost part again, so the message arrives without corruption. UDP is a connectionless protocol: when you send a message, you do not know whether it will arrive, it could get lost on the way, and it may arrive corrupted.
Ordering: With TCP, if you send two messages along a connection, one after the other, you know the first message will arrive first; you do not have to worry about data arriving in the wrong order. With UDP, if you send two messages out, you do not know in what order they will arrive.
Weight: TCP is heavyweight. When the low-level parts of the TCP “stream” arrive in the wrong order, resend requests have to be sent and all the out-of-sequence parts put back together, which takes some work. UDP is lightweight: no ordering of messages, no tracking of connections. It is fire and forget, so it is a lot quicker, and the network card / OS have very little work to do to translate the data back from the packets.
Boundaries: TCP data is read as a “stream,” with nothing distinguishing where one packet ends and another begins; there may be multiple packets per read call. UDP packets (datagrams) are sent individually and are guaranteed to be whole if they arrive: one packet per read call.
Examples: TCP — World Wide Web (HTTP, port 80), e-mail (SMTP, port 25), File Transfer Protocol (FTP, port 21), Secure Shell (SSH, port 22). UDP — Domain Name System (DNS, port 53), streaming media applications such as IPTV, Voice over IP (VoIP), Trivial File Transfer Protocol (TFTP), online multiplayer games.
—————————————–
30 Q: What is HTTP?
Short for HyperText Transfer Protocol, the underlying protocol used by the World Wide Web. HTTP defines how messages are formatted and transmitted, and what actions Web servers and browsers should take in response to various commands. For example, when you enter a URL in your browser, this actually sends an HTTP command to the Web server directing it to fetch and transmit the requested Web page.
The other main standard that controls how the World Wide Web works is HTML, which covers how Web pages are formatted and displayed.
HTTP is called a stateless protocol because each command is executed independently, without any knowledge of the commands that came before it. This is the main reason that it is difficult to implement Web sites that react intelligently to user input. This shortcoming of HTTP is being addressed in a number of new technologies, including ActiveX, Java, JavaScript and cookies.
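Because HTTP is a plain-text protocol, a request is just formatted lines of text. The sketch below builds a GET request and parses the status code out of a response status line; the host name and the sample response bytes are made-up examples, and no network connection is made.

```python
# Building and parsing HTTP messages by hand, for illustration only.
def build_get(host: str, path: str) -> bytes:
    """Format a minimal HTTP/1.1 GET request."""
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            "Connection: close\r\n"
            "\r\n").encode("ascii")

def parse_status(response: bytes) -> int:
    """Extract the numeric status code from a raw HTTP response."""
    status_line = response.split(b"\r\n", 1)[0]  # e.g. b"HTTP/1.1 200 OK"
    return int(status_line.split()[1])

request = build_get("example.com", "/index.html")
print(request.decode().splitlines()[0])                # GET /index.html HTTP/1.1
print(parse_status(b"HTTP/1.1 404 Not Found\r\n\r\n"))  # 404
```

In practice you would send `request` over a TCP socket to port 80 and read the response back, but the message format itself is exactly this.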
——————————————
32 Q: Explain the OSI 7 layer model.
The OSI, or Open System Interconnection, model defines a networking framework for implementing protocols in seven layers. Control is passed from one layer to the next, starting at the application layer in one station, proceeding to the bottom layer, over the channel to the next station and back up the hierarchy.
Application (Layer 7)
This layer supports application and end-user processes. Communication partners are identified, quality of service is identified, user authentication and privacy are considered, and any constraints on data syntax are identified. Everything at this layer is application-specific. This layer provides application services for file transfers, e-mail, and other network software services. Telnet and FTP are applications that exist entirely in the application level. Tiered application architectures are part of this layer.
Presentation (Layer 6)
This layer provides independence from differences in data representation (e.g., encryption) by translating from application to network format, and vice versa. The presentation layer works to transform data into the form that the application layer can accept. This layer formats and encrypts data to be sent across a network, providing freedom from compatibility problems. It is sometimes called the syntax layer.
Session (Layer 5)
This layer establishes, manages and terminates connections between applications. The session layer sets up, coordinates, and terminates conversations, exchanges, and dialogues between the applications at each end. It deals with session and connection coordination.
Transport (Layer 4)
This layer provides transparent transfer of data between end systems, or hosts, and is responsible for end-to-end error recovery and flow control. It ensures complete data transfer.
Network (Layer 3)
This layer provides switching and routing technologies, creating logical paths, known as virtual circuits, for transmitting data from node to node. Routing and forwarding are functions of this layer, as well as addressing, internetworking, error handling, congestion control and packet sequencing.
Data Link (Layer 2)
At this layer, data packets are encoded and decoded into bits. It furnishes transmission protocol knowledge and management and handles errors in the physical layer, flow control and frame synchronization. The data link layer is divided into two sub layers: The Media Access Control (MAC) layer and the Logical Link Control (LLC) layer. The MAC sub layer controls how a computer on the network gains access to the data and permission to transmit it. The LLC layer controls frame synchronization, flow control and error checking.
Physical (Layer 1)
This layer conveys the bit stream – electrical impulse, light or radio signal — through the network at the electrical and mechanical level. It provides the hardware means of sending and receiving data on a carrier, including defining cables, cards and physical aspects. Fast Ethernet, RS232, and ATM are protocols with physical layer components.
————————————-
33 Q: How many IP addresses are in a /17 (CIDR notation)? What is the subnet mask for a /17 network? What is the broadcast? How many addresses are usable?
A /17 leaves 15 host bits, so it contains 2^15 = 32,768 addresses. The subnet mask is 255.255.128.0. The broadcast address is the highest address in the block (for 10.0.0.0/17, that is 10.0.127.255). Subtracting the network and broadcast addresses leaves 32,766 usable host addresses.
34 Q: How do hostnames get translated into IP addresses? How do IP addresses get translated into MAC addresses?
Hostnames are resolved to IP addresses by the Domain Name System (DNS), typically after first checking the local hosts file. On a local network, IP addresses are translated into MAC addresses by the Address Resolution Protocol (ARP); the operating system caches the results in an ARP table.
36 Q: How does a trace route program work?
traceroute is a computer network diagnostic tool for displaying the route (path) and measuring transit delays of packets across an Internet Protocol (IP) network. It works by sending probe packets with successively increasing time-to-live (TTL) values: each router that decrements a probe’s TTL to zero discards the probe and returns an ICMP “time exceeded” message, revealing that hop’s address.
traceroute outputs the list of traversed routers in simple text format, together with timing information.
Traceroute is available on most operating systems.
On the Apple Mac OS, traceroute is available through opening ‘Network Utilities’ then selecting ‘Traceroute’ tab, or typing the “traceroute” command in the terminal. On Microsoft Windows operating systems it is named tracert. Windows NT-based operating systems also provide PathPing, with similar functionality. Variants with similar functionality are also available, such as tracepath on Linux installations. For Internet Protocol Version 6 (IPv6) the tool sometimes has the name traceroute6.
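The core trick — one probe per TTL value — can be hinted at in Python. Reading the ICMP "time exceeded" replies requires a raw socket (and usually root privileges), so this sketch only shows how the per-probe TTL is set on an ordinary UDP socket; no probe is actually sent.

```python
import socket

# Sketch of traceroute's per-hop mechanism: a real implementation would
# loop ttl = 1, 2, 3, ..., send a probe for each, and read the ICMP
# "time exceeded" reply to learn each router's address.
def make_probe_socket(ttl: int) -> socket.socket:
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)  # cap hop count
    return s

s = make_probe_socket(3)
ttl_value = s.getsockopt(socket.IPPROTO_IP, socket.IP_TTL)
print(ttl_value)   # 3
s.close()
```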
————————————
Questions related to DATA STRUCTURE
1. What is a data structure?
In computer science, a data structure is a particular way of storing and organizing data in a computer so that it can be used efficiently.
Different kinds of data structures are suited to different kinds of applications, and some are highly specialized to specific tasks. For example, B-trees are particularly well-suited for implementation of databases, while compiler implementations usually use hash tables to look up identifiers.
Data structures provide a means to manage huge amounts of data efficiently, such as large databases and internet indexing services. Usually, efficient data structures are a key to designing efficient algorithms. Some formal design methods and programming languages emphasize data structures, rather than algorithms, as the key organizing factor in software design.
Basic principles
Data structures are generally based on the ability of a computer to fetch and store data at any place in its memory, specified by an address—a bit string that can be itself stored in memory and manipulated by the program. Thus the record and array data structures are based on computing the addresses of data items with arithmetic operations; while the linked data structures are based on storing addresses of data items within the structure itself. Many data structures use both principles, sometimes combined in non-trivial ways (as in XOR linking)
The implementation of a data structure usually requires writing a set of procedures that create and manipulate instances of that structure. The efficiency of a data structure cannot be analyzed separately from those operations. This observation motivates the theoretical concept of an abstract data type, a data structure that is defined indirectly by the operations that may be performed on it, and the mathematical properties of those operations (including their space and time cost).
————————————–
2. What does abstract data type means?
In computer science, an abstract data type (ADT) is a mathematical model for a certain class of data structures that have similar behavior; or for certain data types of one or more programming languages that have similar semantics. An abstract data type is defined indirectly, only by the operations that may be performed on it and by mathematical constraints on the effects (and possibly cost) of those operations.
For example, an abstract stack data structure could be defined by three operations: push, that inserts some data item onto the structure, pop, that extracts an item from it (with the constraint that each pop always returns the most recently pushed item that has not been popped yet), and peek, that allows data on top of the structure to be examined without removal. When analyzing the efficiency of algorithms that use stacks, one may also specify that all operations take the same time no matter how many items have been pushed into the stack, and that the stack uses a constant amount of storage for each element.
Abstract data types are purely theoretical entities, used (among other things) to simplify the description of abstract algorithms, to classify and evaluate data structures, and to formally describe the type systems of programming languages. However, an ADT may be implemented by specific data types or data structures, in many ways and in many programming languages; or described in a formal specification language. ADTs are often implemented as modules: the module’s interface declares procedures that correspond to the ADT operations, sometimes with comments that describe the constraints. This information hiding strategy allows the implementation of the module to be changed without disturbing the client programs.
————————–
3. Evaluate the following prefix expression: “+ + 2 6 + – 1 3 24” (similar types can be asked)
+ + 2 6 + - 1 3 24
= (2 + 6) + ((1 - 3) + 24)
= 8 + (-2 + 24)
= 8 + 22
= 30
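The evaluation above can be mechanized with the usual stack method: scan the tokens right to left, pushing numbers and applying each operator to the top two stack values. A minimal sketch (integer operands, space-separated tokens assumed):

```python
# Evaluate a prefix expression by scanning right to left with a stack.
def eval_prefix(tokens):
    ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
           "*": lambda a, b: a * b, "/": lambda a, b: a / b}
    stack = []
    for tok in reversed(tokens):
        if tok in ops:
            a, b = stack.pop(), stack.pop()  # a was pushed after b
            stack.append(ops[tok](a, b))
        else:
            stack.append(int(tok))           # operand
    return stack[0]

print(eval_prefix("+ + 2 6 + - 1 3 24".split()))  # 30
```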
————————————
4. Convert the following infix expression to post fix notation ((a+2)*(b+4)) -1 (Similar types can be asked)
Answer: a 2 + b 4 + * 1 -
(Each operand keeps its order; each operator follows its operands.)
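The conversion can be done with the standard shunting-yard algorithm. A hedged sketch (space-separated tokens and only the four basic left-associative operators are assumed):

```python
# Shunting-yard sketch: operands go straight to output, operators wait on
# a stack until an operator of lower precedence (or a parenthesis) arrives.
def to_postfix(tokens):
    prec = {"+": 1, "-": 1, "*": 2, "/": 2}
    out, stack = [], []
    for tok in tokens:
        if tok == "(":
            stack.append(tok)
        elif tok == ")":
            while stack[-1] != "(":
                out.append(stack.pop())
            stack.pop()                       # discard the "("
        elif tok in prec:
            while stack and stack[-1] != "(" and prec[stack[-1]] >= prec[tok]:
                out.append(stack.pop())       # pop higher/equal precedence
            stack.append(tok)
        else:                                 # operand
            out.append(tok)
    while stack:
        out.append(stack.pop())
    return " ".join(out)

print(to_postfix("( ( a + 2 ) * ( b + 4 ) ) - 1".split()))
# a 2 + b 4 + * 1 -
```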
————————
5. How is it possible to insert different types of elements in a stack?
A stack is a linear data structure in which data items are inserted and deleted at one end only (the top). Different types of elements can be inserted into the same stack when the stack stores generic references to the items rather than the items themselves (e.g. void* pointers in C, or Object references in Java).
There are mainly two operations performed on a stack:
push: writing a value to the stack, moving the stack pointer up to accommodate the new item.
pop: reading (and removing) a value from the stack, moving the stack pointer down.
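A minimal sketch of these two operations, built on a Python list (where heterogeneous elements are stored naturally, since the list holds references):

```python
# A minimal stack: push and pop operate at one end only (the top).
class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)   # write a value onto the top

    def pop(self):
        return self._items.pop()   # read and remove the top value

    def peek(self):
        return self._items[-1]     # inspect the top without removing

    def is_empty(self):
        return not self._items

s = Stack()
s.push(1)
s.push("two")    # mixed types are fine: the stack stores references
print(s.pop())   # two
print(s.pop())   # 1
```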
—————————————–
6. Stack can be described as a pointer. Explain.
Because a stack contains a head pointer that always points to the top of the stack, and all stack operations are performed through this head pointer, a stack can be described as a pointer.
———————————
9. Explain the types of linked lists
The types of linked lists are:
Singly linked list: has a head pointer, and each node holds a reference to the next node.
Doubly linked list: each node holds references to both the previous and the next node (often with head and tail pointers), allowing traversal in both directions.
Circular linked list: a linked list whose last node holds a reference back to the first node.
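The simplest of the three, a singly linked list, can be sketched as follows (class and method names are illustrative):

```python
# Singly linked list: each node holds data plus a reference to the next
# node; the list itself keeps only a head pointer.
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

class SinglyLinkedList:
    def __init__(self):
        self.head = None

    def push_front(self, data):
        node = Node(data)
        node.next = self.head     # new node points at the old head
        self.head = node

    def to_list(self):
        out, cur = [], self.head
        while cur:                # walk the chain of next references
            out.append(cur.data)
            cur = cur.next
        return out

lst = SinglyLinkedList()
for x in (3, 2, 1):
    lst.push_front(x)
print(lst.to_list())   # [1, 2, 3]
```

A doubly linked list adds a `prev` reference per node; a circular list sets the last node's `next` back to the head.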
————————-
10. How would you sort a linked list?
Answer
Use merge sort
Use tree sort
or
Step 1: Compare the current node in the unsorted list with every element in the rest of the list. If the current element is greater than some other element, go to step 2; otherwise go to step 3.
Step 2: Position the element with the higher value after the position of the current element. Compare the next element; go to step 1 if an element exists, else stop the process.
Step 3: If the list is already in sorted order, insert the current node at the end of the list. Compare the next element, if any, and go to step 1, or quit.
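Merge sort is the usual recommendation because it needs no random access: split the chain in half with slow/fast pointers, sort each half, then merge. A sketch (node class and test data are illustrative):

```python
# Merge sort on a singly linked list.
class Node:
    def __init__(self, data, next=None):
        self.data, self.next = data, next

def merge_sort(head):
    if head is None or head.next is None:
        return head                       # 0 or 1 node: already sorted
    slow, fast = head, head.next          # find the midpoint
    while fast and fast.next:
        slow, fast = slow.next, fast.next.next
    mid, slow.next = slow.next, None      # cut the list in two
    left, right = merge_sort(head), merge_sort(mid)
    dummy = tail = Node(None)             # merge the sorted halves
    while left and right:
        if left.data <= right.data:
            tail.next, left = left, left.next
        else:
            tail.next, right = right, right.next
        tail = tail.next
    tail.next = left or right             # append whichever half remains
    return dummy.next

head = None
for x in (2, 5, 1, 4, 3):
    head = Node(x, head)                  # builds 3 -> 4 -> 1 -> 5 -> 2
head = merge_sort(head)
out = []
while head:
    out.append(head.data)
    head = head.next
print(out)    # [1, 2, 3, 4, 5]
```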
————————-
13. What do you mean by Base case, Recursive case, Binding Time, Run-Time Stack and Tail Recursion?
Base case: the case in a recursive definition whose answer is known directly, without making a further recursive call; it terminates the recursion and lets the calls unwind.
Recursive case: a case whose answer is expressed in terms of a smaller instance of the same problem, moving closer to the base case.
Binding time: the time at which a name is bound to its value or meaning, e.g. at compile time (static binding) or at run time (dynamic binding).
Run-time stack: the stack used to save a function’s stack frame on every call, including every recursive call.
Tail recursion: a situation in which a function contains a single recursive call and it is the final statement to be executed; such a call can be replaced by iteration.
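The base-case/tail-call terms above can be illustrated with factorial. The tail-recursive form carries an accumulator so the recursive call is the last action, which makes the mechanical rewrite to a loop direct (Python itself does not eliminate tail calls, so the loop form is also what you would use in practice):

```python
# Tail-recursive factorial and its iterative equivalent.
def factorial_tail(n, acc=1):
    if n <= 1:                              # base case: answer known directly
        return acc
    return factorial_tail(n - 1, acc * n)   # tail call: the final statement

def factorial_iter(n):
    acc = 1
    while n > 1:                            # same computation as a loop
        acc, n = acc * n, n - 1
    return acc

print(factorial_tail(5))   # 120
print(factorial_iter(5))   # 120
```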
———————————
15. Explain binary search and Fibonacci search.
Binary search is used to find an element of a sorted list only. The middle value is first compared with the element being sought; if it matches, the search stops. Otherwise, the same mechanism is applied to the elements on the left or right of the middle element, depending on whether the sought value is less than or greater than the middle element.
Fibonacci search: Fibonacci search is used to find an element of a sorted array with the help of Fibonacci numbers. Instead of halving the interval, it splits the array at positions determined by consecutive Fibonacci numbers; a Fibonacci number is subtracted from the index, thereby reducing the size of the list, and the probe positions can be computed using only addition and subtraction.
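The binary-search mechanism described above, as a short sketch:

```python
# Binary search over a sorted list: compare the middle element, then
# narrow the search to the left or right half each step.
def binary_search(items, target):
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid           # found: return its index
        elif items[mid] < target:
            lo = mid + 1         # search the right half
        else:
            hi = mid - 1         # search the left half
    return -1                    # not present

data = [2, 5, 8, 12, 16, 23, 38]
print(binary_search(data, 23))   # 5
print(binary_search(data, 7))    # -1
```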
————————
16. What is the maximum total number of nodes in a tree that has N levels? Note that the root is level (zero)
For a binary tree with N levels and the root at level zero (levels 0 through N−1), level i can hold at most 2^i nodes, so the maximum total number of nodes is 2^0 + 2^1 + … + 2^(N−1) = 2^N − 1.
(Related fact: if every non-leaf node in a binary tree has non-empty left and right subtrees, the tree is termed a strictly binary tree; such a tree with n leaves always contains 2n − 1 nodes.)
————————————–
17. How many different binary trees and binary search trees can be made from three nodes that contain the key values 1, 2 & 3?
Do you know any formula to calculate how many binary search trees are possible?
answer:
C(2n, n) / (n + 1) = (2n)! / (n! · n! · (n + 1))
where n is the number of elements (keys). This is the nth Catalan number. For n = 3 there are 5 distinct binary search trees on the keys 1, 2 and 3; counting all binary trees (5 shapes × 3! arrangements of the keys), there are 30.
like:
N Number of BST
1 1
2 2
3 5
4 14
5 42
6 132
and so on
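The table above can be reproduced directly from the Catalan formula (Python 3.8+ assumed for `math.comb`):

```python
from math import comb

# Number of distinct binary search trees on n keys: the nth Catalan
# number, C(2n, n) / (n + 1).
def num_bsts(n):
    return comb(2 * n, n) // (n + 1)

print([num_bsts(n) for n in range(1, 7)])   # [1, 2, 5, 14, 42, 132]
```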
——————————–
18. A list is ordered from smaller to largest when a sort is called. Which sort would take the longest time to execute?
Quick sort. With a naive first- or last-element pivot choice, an already-sorted list is quicksort’s worst case: every partition is maximally unbalanced, giving O(n²) running time.
————————
19. A list is ordered from smaller to largest when a sort is called. Which sort would take the shortest time to execute?
Bubble sort. With the standard early-exit optimization (stop when a full pass makes no swaps), bubble sort finishes in a single O(n) pass over an already-sorted list.
——————————
20. When will you sort an array of pointers to list elements, rather than sorting the elements themselves?
If the data structures being sorted are large and located at different places in memory, it is preferable to sort an array of pointers to the elements rather than the elements themselves: only the pointers are swapped, avoiding the cost of copying large records.
————————————
Interview Questions related to DBMS
1. What is normalization? Explain different levels of normalization?
Normalization:
The process of reducing a complex data structure into its simplest, most stable structure. In general, the process entails the removal of redundant attributes, keys, and relationships from a conceptual data model.
Normalization is the process of designing database tables to ensure that the fields in each table do not repeat, are fully identified by a unique KEY, and are not dependent on any non-key ATTRIBUTEs.
Normalization helps in reducing data redundancy; as we move towards higher normal forms, redundancy decreases.
1NF: This level of normalization states that there must not be any repeating groups or duplicate rows in the tables that we use. In other words, all the tables used must have a primary key defined.
2NF: This level of normalization states that data redundancy can be reduced if attributes that are dependent on only part of a composite primary key are isolated in a separate table. Not only does this reduce data redundancy, it also helps avoid losing data when a delete is done. For example, consider a table with the columns Part Id, State, City and Country, where Part Id and Country form the composite primary key. The attributes State and City depend only on Country. 2NF states that in such a case the table should be split into two tables: one with Part Id and Country as columns, the other with Country, State and City as columns. In the original single table, deleting all the rows with Part Id = ‘X’ would also lose the country-related data; after the split this cannot happen.
3NF: This level of normalization states that if a dependency exists on certain attributes other than the primary key, the table should be split according to that dependency. Consider the same example, but now with Part Id as the only primary key. State and City depend only on Country, not on Part Id; this table is already in 1NF and 2NF, but to achieve 3NF we would perform the same split as above.
Level / Rule
First normal form (1NF): An entity type is in 1NF when it contains no repeating groups of data.
Second normal form (2NF): An entity type is in 2NF when it is in 1NF and all of its non-key attributes are fully dependent on its primary key.
Third normal form (3NF): An entity type is in 3NF when it is in 2NF and all of its attributes are directly dependent on the primary key.
—————————————————
2. What is denormalization and when would you go for it?
The process of adding redundant data to get rid of complex joins, in order to optimize database performance. This is done to speed up database access by moving from a higher to a lower form of normalization.
In other words, we can define de-normalization as:
De-normalization is the process of attempting to optimize the performance of a database by adding redundant data. It’s used To introduce redundancy into a table in order to incorporate data from a related table. The related table can then be eliminated. De-normalization can improve efficiency and performance by reducing complexity in a data warehouse schema.
De-normalization is an application tool in the SQL Server model. There are three methods for de-normalization:
• Entity inheritance
• Role expansion
• Lookup entities.
————————————-
3. How do you implement one-to-one, one-to-many and many-to-many relationships while designing tables?
One-to-one: Use a foreign key to the referenced table:
student: student_id, first_name, last_name, address_id
address: address_id, address, city, zipcode, student_id # you can have a
# “link back” if you need
One-to-many: Use a foreign key on the many side of the relationship linking back to the “one” side:
teachers: teacher_id, first_name, last_name # the “one” side
classes: class_id, class_name, teacher_id # the “many” side
Many-to-many: Use a junction table:
student: student_id, first_name, last_name
classes: class_id, name, teacher_id
student_classes: class_id, student_id # the junction table
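The junction-table design can be exercised end to end against an in-memory SQLite database (run here through Python's standard sqlite3 module; the sample names and IDs are invented for illustration):

```python
import sqlite3

# Many-to-many sketch using the student/classes/student_classes schema above.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE student (student_id INTEGER PRIMARY KEY,
                          first_name TEXT, last_name TEXT);
    CREATE TABLE classes (class_id INTEGER PRIMARY KEY,
                          name TEXT, teacher_id INTEGER);
    CREATE TABLE student_classes (
        class_id   INTEGER REFERENCES classes(class_id),
        student_id INTEGER REFERENCES student(student_id),
        PRIMARY KEY (class_id, student_id)   -- one row per enrolment
    );
""")
con.execute("INSERT INTO student VALUES (1, 'Ada', 'Lovelace')")
con.execute("INSERT INTO classes VALUES (10, 'Maths', 7), (11, 'Logic', 8)")
con.execute("INSERT INTO student_classes VALUES (10, 1), (11, 1)")

# Resolve the many-to-many relationship through the junction table.
rows = con.execute("""
    SELECT c.name
    FROM student s
    JOIN student_classes sc ON sc.student_id = s.student_id
    JOIN classes c          ON c.class_id = sc.class_id
    WHERE s.first_name = 'Ada'
    ORDER BY c.name
""").fetchall()
print(rows)   # [('Logic',), ('Maths',)]
con.close()
```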
————————————
4. What’s the difference between a primary key and a unique key?
Primary Key: A column in a table whose values uniquely identify the rows in the table. A primary key value cannot be NULL.
Unique Key: A unique key also uniquely identifies each row in the table; there can be one and only one row for each unique key value. Unlike a primary key, a unique key column can contain a NULL value.
A table can have only one primary key, but it can have more than one unique key.
—————————
5. What are user defined datatypes and when you should go for them?
User defined data types are based on system data types. They should be used when multiple tables need to store the same type of data in a column and you need to ensure that all these columns are exactly the same including length, and nullability.
Parameters for user defined datatype:
Name
The system data type on which the user-defined data type is based.
Nullability
For example, a user-defined data type called post_code could be created based on char system data type.
——————————
6. What is bit datatype and what’s the information that can be stored inside a bit column?
The bit datatype is used to store boolean information such as 1 or 0 (true or false). Until SQL Server 6.5, a bit column could hold either a 1 or a 0 and there was no support for NULL; from SQL Server 7.0 onwards, the bit datatype can represent a third state, which is NULL.
—————————–
7. Define candidate key, alternate key, composite key.
* Alternate key – An alternate key is any candidate key which is not selected to be the primary key
* Candidate key – A candidate key is a field or combination of fields that can act as a primary key field for that table to uniquely identify each record in that table.
For Eg:
The table:
Employee(Name, Address, Ssn, Employee_Id (primary key), Phone_ext)
In the above example, Ssn and Employee_Id are candidate keys.
* Compound key – compound key (also called a composite key or concatenated key) is a key that consists of 2 or more attributes.
* Primary key – a primary key is a value that can be used to identify a unique row in a table. Attributes are associated with it. Examples of primary keys are Social Security numbers (associated to a specific person) or ISBNs (associated to a specific book).
In the relational model of data, a primary key is a candidate key chosen as the main method of uniquely identifying a tuple in a relation.
For Eg:
Employee(Name, Address, Ssn, Employee_Id (primary key), Phone_ext)
* Superkey – A superkey is defined in the relational model as a set of attributes of a relation variable (relvar) for which it holds that in all relations assigned to that variable there are no two distinct tuples (rows) that have the same values for the attributes in this set. Equivalently a superkey can also be defined as a set of attributes of a relvar upon which all attributes of the relvar are functionally dependent.
For Eg:
Employee(Name, Address, Ssn, Employee_Id (primary key), Phone_ext)
Any set of attributes that contains a candidate key (for example {Employee_Id} or {Ssn, Phone_ext}) is a superkey.
* Foreign key – a foreign key (FK) is a field or group of fields in a database record that points to a key field or group of fields forming a key of another database record in some (usually different) table. Usually a foreign key in one table refers to the primary key (PK) of another table. This way references can be made to link information together and it is an essential part of database normalization.
For Eg:
For example, for a student:
School(Name, Address, Phone, School_Reg_no (primary key))
Student(Name, Address, Ssn, School_Reg_no (foreign key referencing School))
——————————
8. What are defaults? Is there a column to which a default can’t be bound?
A default is a value that will be used by a column, if no value is supplied to that column while inserting data. IDENTITY columns and timestamp columns can’t have defaults bound to them.
——————————
9. What are the different types of joins?
Types of joins:
1. Self join — used when you need to join a table to itself.
2. Equi join — the relation between the tables (e.g. the employee table and the dept table) is expressed with the equality (=) operator.
3. Non-equi join — the relationship is expressed using an operator other than =.
4. Outer join — returns rows from one table that have no direct match in the other table, so the otherwise missing rows can be obtained.
5. Cross join — produces the cross product (Cartesian product) of two tables.
6. Natural join — based on all columns in the two tables that have the same name.
7. Inner join — returns only the matched rows.
8. Full outer join — inner join + left outer join + right outer join.
———————————-
11. What cursor type do you use to retrieve multiple recordsets?
An explicit cursor is used to retrieve multiple rows.
————————–
12. Diffrence between a “where” clause and a “having” clause
Short answer:
Use HAVING instead of WHERE when you want to establish a condition that involves a grouping (aggregating) function. (Grouping functions are things like count(), max(), sum(), etc.)
Longer answer:
Either WHERE or HAVING clauses allow you to restrict the data you’re selecting by establishing conditions.
(Usually people say stuff like “WHERE allows you to qualify on the original data in the tables and HAVING allows you to qualify on groups,” which is indeed how you typically use them, but that’s not 100% accurate.)
Conditions in WHERE clauses cannot contain grouping functions.
The conditions you establish in a HAVING clause *must* either (1) contain a grouping function, or (2) if the HAVING clause specifies table fields, those table fields must be in a GROUP BY clause.
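The distinction can be demonstrated against an in-memory SQLite database (via Python's standard sqlite3 module; the table and values are invented for illustration): WHERE filters individual rows before grouping, HAVING filters the groups after the aggregate is computed.

```python
import sqlite3

# WHERE vs HAVING on a toy orders table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (customer TEXT, amount INTEGER)")
con.executemany("INSERT INTO orders VALUES (?, ?)",
                [("ann", 10), ("ann", 30), ("bob", 5),
                 ("bob", 5), ("cat", 100)])

rows = con.execute("""
    SELECT customer, SUM(amount) AS total
    FROM orders
    WHERE amount >= 10          -- row-level condition (no aggregates here)
    GROUP BY customer
    HAVING SUM(amount) > 25     -- group-level condition on the aggregate
    ORDER BY customer
""").fetchall()
print(rows)   # [('ann', 40), ('cat', 100)]
con.close()
```

bob's rows are removed by WHERE before grouping ever happens, so no "bob" group reaches the HAVING test.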
—————————-
13. What is the difference between “procedure” and “function”?
The main difference is DML (data manipulation).
Through a stored procedure you can manipulate data, i.e. insert, update and delete records in a table, whereas a function cannot manipulate data. Then come secondary differences: a function must return a value, while a stored procedure does not unless you specify an output parameter; and a stored procedure is precompiled (compiled only once), whereas a function is compiled again each time it is called.
—————————
23. How to remove duplicate records from a table?
Here’s something you might try if SQL Server version is 2005 or later.
WITH cte AS
(
SELECT {list-of-columns-in-table},
row_number() over (PARTITION BY {list-of-key-columns} ORDER BY {rule-to-determine-row-to-keep}) as sequence
FROM myTable
)
DELETE FROM cte
WHERE sequence > 1
This uses a common table expression (CTE) and adds a sequence column. {list-of-columns-in-table} is just as it states. Not all columns are needed, but I won’t explain here.
The {list-of-key-columns} is the columns that you use to define what is a duplicate.
{rule-to-determine-row-to-keep} is a sequence so that the first row is the row to keep. For example, if you want to keep the oldest row, you would use a date column for sequence.
Here’s an example of the query with real columns.
WITH cte AS
(
SELECT ID, CourseName, DateAdded,
row_number() over (PARTITION BY CourseName ORDER BY DateAdded) as sequence
FROM Courses
)
DELETE FROM cte
WHERE sequence > 1
This example removes duplicate rows based on the CourseName value and keeps the oldest based on the DateAdded value.
———————————————
28. When do you get a constraint violation? What are the types of constraints?
The database returns an error message when a constraint is violated.
Different types of constraints:
1.Domain Constraints:A Domain constraint deals with one or more columns
2.Entity Constraints
3.Referential Integrity Constraints.
4.Key Constraints
4.1.Primary Key Constraints
4.2 Foreign Key Constraints
4.3 Unique key Constraints
4.4 Check constraint
4.5 Default constraint
——————————–
29. How to convert RAW datatype into TEXT?
You can use the bulk copy program (BCP) to convert data from RAW to TEXT or from TEXT to RAW.
———————————
31. How functional dependency is related to database table design?
Normalization is the technique of breaking complex tables into smaller, more understandable ones to improve the optimization of the database structure. The concept of “functional dependency” is followed in Second Normal Form (2NF), which means all the columns in the schema should be functionally dependent on the primary key of that schema; all the columns in a table should have a relationship with the main key.
Say, for example, you have an Employee table with Empno, Ename and Salary columns. Here Empno is the main key, Ename is the employee’s name and Salary is the employee’s salary; these are functionally dependent on Empno. If you include InvoiceNo in this table, it is irrelevant and is not functionally dependent on Empno.
As per Second Normal Form (2NF), all the columns that are not functionally dependent on the key should be removed from the schema and moved to a new table, increasing the performance of the database.
Formally: given a relation R, attribute Y of R is functionally dependent on attribute X of R if and only if each X-value in R has associated with it precisely one Y-value.
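The Employee/InvoiceNo example can be sketched in SQLite via Python's sqlite3 module (a minimal illustration of the 2NF split; the data values are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# 2NF: Ename and Salary are functionally dependent on Empno, so they
# stay with the key...
cur.execute("CREATE TABLE Employee (Empno INTEGER PRIMARY KEY, Ename TEXT, Salary REAL)")
# ...while InvoiceNo, which does not depend on Empno, moves to its own
# table keyed by its own identifier.
cur.execute("CREATE TABLE Invoice (InvoiceNo INTEGER PRIMARY KEY, "
            "Empno INTEGER REFERENCES Employee(Empno))")

cur.execute("INSERT INTO Employee VALUES (1, 'Alice', 50000)")
cur.execute("INSERT INTO Invoice VALUES (101, 1)")
row = cur.execute("SELECT Ename FROM Employee JOIN Invoice USING (Empno) "
                  "WHERE InvoiceNo = 101").fetchone()
print(row[0])  # Alice
```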
———————————
32. What is a “trigger”?
A database trigger is procedural code that is automatically executed in response to certain events on a particular table or view in a database. The trigger is mostly used for maintaining the integrity of the information on the database. For example, when a new record (representing a new worker) is added to the employees table, new records should be created also in the tables of the taxes, vacations and salaries.
Triggers in Microsoft SQL Server
Microsoft SQL Server supports triggers either after or instead of an insert, update or delete operation. They can be set on tables and views with the constraint that a view can be referenced only by an INSTEAD OF trigger.
Microsoft SQL Server 2005 introduced support for Data Definition Language (DDL) triggers, which can fire in reaction to a very wide range of events, including:
Drop table
Create table
Alter table
Login events
A full list is available on MSDN.
Performing conditional actions in triggers (or testing data following modification) is done through accessing the temporary Inserted and Deleted tables.
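A minimal trigger sketch using Python's sqlite3 module. The syntax here is SQLite's, which exposes the inserted row as NEW; SQL Server would read it from the Inserted table instead. The schema follows the employees/vacations example above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE vacations (employee_id INTEGER, days INTEGER);

-- When a new worker is added, automatically create a matching
-- vacations record (the integrity-maintenance use case).
CREATE TRIGGER new_worker AFTER INSERT ON employees
BEGIN
    INSERT INTO vacations (employee_id, days) VALUES (NEW.id, 0);
END;
""")
conn.execute("INSERT INTO employees (name) VALUES ('Bob')")
rows = conn.execute("SELECT employee_id, days FROM vacations").fetchall()
print(rows)  # [(1, 0)]
```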
—————————————-
35. What is a VIEW? How to get script for a view?
A view is a logical partition of one or more tables, either horizontal (by putting a condition on the rows) or vertical (by including only selected columns).
A view can be used to update the underlying table, and it also provides a security mechanism.
The view definition (query) is stored as plain text or as encrypted text.
We can see a view definition by using the following stored procedure:
exec sp_helptext viewname
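A minimal sketch of a vertical-partition view, using Python's sqlite3 module (SQLite, which has no sp_helptext; the table and data are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Courses (ID INTEGER, CourseName TEXT, Fee INTEGER);
INSERT INTO Courses VALUES (1, 'SQL', 100), (2, 'Python', 200);

-- A vertical partition: the view exposes only selected columns,
-- hiding Fee from users who query through the view.
CREATE VIEW CourseNames AS SELECT ID, CourseName FROM Courses;
""")
rows = conn.execute("SELECT CourseName FROM CourseNames ORDER BY ID").fetchall()
print([r[0] for r in rows])  # ['SQL', 'Python']
```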
————————————
37. What is SQL*Loader?
SQL*Loader is an Oracle utility used to load data into a table from a data file containing the records that need to be loaded. SQL*Loader takes a data file, as well as a control file, to insert data into the table. When a control file is executed, it can create three (3) files:
a log file, a bad file (reject file), and a discard file.
The log file tells you the state of the tables and indexes and the number of logical records already read from the input data file. This information can be used to resume a load where it left off.
The bad file (reject file) gives you the records that were rejected because of formatting errors or because they caused Oracle errors.
The discard file contains the records that did not meet any of the loading criteria, such as the WHEN clauses specified in the control file. These records differ from rejected records.
——————————–
38. Difference between the character datatypes char and varchar
char will append spaces to the end of whatever you store in the field to fill the full declared length (e.g. a char(20) column always occupies 20 bytes).
varchar will only store the characters that you put into it.
Use char when the data values in a column are expected to be consistently close to the same size. (example being customer numbers “ABC1234”)
Use varchar when the data values in a column are expected to vary considerably in size. (example addresses comment fields…)
You would use char primarily for performance reasons. The computer (database server) can more quickly manipulate fixed length fields because it doesn’t have to do as much error checking and validation. The server always knows exactly how much memory to allocate for a fixed length field instead of calculating the length on the fly every time that field is accessed/updated.
—————————–
39. What is the difference among “dropping a table”, “truncating a table” and “deleting all records” from a table.
Here is a more complete answer:
“Dropping” a table removes the table definition from the schema of the database and, of course, all the data.
“Truncating” a table internally marks the table as empty within the database; importantly, the individual deleted records are not recorded in the transaction log.
“Deleting all records” removes the records from the table and records each deletion in the transaction log.
—————————
40. Difference between “ORACLE” and “MICROSOFT ACCESS” databases.
There are many differences between Oracle and Access. First, you need to know that Oracle and Access are both relational database engines. Oracle is sold by Oracle Corporation and Access is sold by Microsoft. The biggest differences between the two are the size and number of users each supports. Oracle is a scalable enterprise database engine that supports from one to many (tens to hundreds of thousands of) simultaneous users. Access, on the other hand, is most commonly used as a single-user database but, since the release of Access 97, it can be augmented with Visual Basic code and can easily support 100+ simultaneous users (with a top limit of 200 users).
Oracle can support very large databases (Tens of Terabytes in size or larger), while Access is designed for much smaller databases in the 1 to 2 gigabyte size range. Oracle has a wide range of support features that can be used to monitor and tune database performance – Access is much more limited in its performance tuning capabilities.
———————————–
46. What is blocking and how would you troubleshoot it?
SQL Server DBA – What is blocking and how would you troubleshoot it? – Feb 27, 2010 at 22:40 pm by Shuchi Gauri
Blocking occurs when a process has acquired lock on a set of rows, and another process is trying to acquire a lock on the same set of rows. In such a case, the other process has to wait until the first process finishes its job and releases the lock on the above said rows.
Use sp_lock procedure to see type of locks acquired by various sessions on the server to find the cause of blocking.
A blocking problem is hinted at by a WAIT status appearing in many of the rows returned by executing the sp_lock stored procedure.
Use sp_who and sp_who2 to return more columns to get more information around the blocking.
Use DBCC INPUTBUFFER (spid). This will show the last 128 characters of the last T-SQL statement executed from the connection referred to through spid. This way one can identify the stored procedure or application module that caused the blocking.
To resolve blocking, you can disconnect the connection causing the blocking using KILL command. If this does not solve the problem permanently, then rewrite the stored procedure/module causing the block more efficiently.
———————————-
47. Explain CREATE DATABASE syntax
CREATE DATABASE DB1
LOGFILE
GROUP 1 '/oracle/oradata/db1/redo1a.log' SIZE 5M,
GROUP 2 '/oracle/oradata/db1/redo2a.log' SIZE 5M
DATAFILE '/oracle/oradata/db1/system.dbf'
SIZE 100M
AUTOEXTEND ON
DEFAULT TEMPORARY TABLESPACE temp
TEMPFILE '/oracle/oradata/db1/temp.dbf' SIZE 10M AUTOEXTEND
ON
UNDO TABLESPACE undotbs
DATAFILE '/oracle/oradata/db1/undo.dbf'
SIZE 10M
CHARACTER SET US7ASCII
NATIONAL CHARACTER SET AL16UTF16;
—————————-
48. How to restart SQL Server in single user mode? How to start SQL Server in minimal configuration mode?
SQL Server can be started from the command line using SQLSERVR.EXE. This EXE has some very important parameters with which a DBA should be familiar. -m is used for starting SQL Server in single-user mode, and -f is used to start SQL Server in minimal configuration mode. Check out SQL Server Books Online for more parameters and their explanations.
———————————-
49. As a part of your job, what are the DBCC commands that you commonly use for database maintenance?
DBCC CHECKDB, DBCC CHECKTABLE, DBCC CHECKCATALOG, DBCC CHECKALLOC, DBCC SHOWCONTIG, DBCC SHRINKDATABASE, DBCC SHRINKFILE, etc.
————————————-
Questions related to Computer Architecture-I
1. What is pipelining?
In computing, a pipeline is a set of data processing elements connected in series, so that the output of one element is the input of the next one. The elements of a pipeline are often executed in parallel or in time-sliced fashion; in that case, some amount of buffer storage is often inserted between elements.
Computer-related pipelines include:
Instruction pipelines, such as the classic RISC pipeline, which are used in processors to allow overlapping execution of multiple instructions with the same circuitry. The circuitry is usually divided up into stages, including instruction decoding, arithmetic, and register fetching stages, wherein each stage processes one instruction at a time.
Graphics pipelines, found in most graphics cards, which consist of multiple arithmetic units, or complete CPUs, that implement the various stages of common rendering operations (perspective projection, window clipping, color and light calculation, rendering, etc.).
Software pipelines, where commands can be written so that the output of one operation is automatically used as the input to the next operation. The Unix pipe system call is a classic example of this concept, although other operating systems support pipes as well.
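A software pipeline can be sketched compactly with Python generators, where each stage consumes the output of the previous one, analogous to a shell pipeline (a minimal illustration; the stage names are made up):

```python
# Stage 1: produce data.
def numbers():
    yield from range(10)

# Stage 2: transform each item as it flows through.
def squares(stage):
    for n in stage:
        yield n * n

# Stage 3: filter the stream.
def evens(stage):
    for n in stage:
        if n % 2 == 0:
            yield n

# Connect the stages: output of one is the input of the next.
result = list(evens(squares(numbers())))
print(result)  # [0, 4, 16, 36, 64]
```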
——————————-
2.What are the five stages in a DLX pipeline?
Answers:
The 5 stages in the DLX pipeline are:
1. Instruction Fetch.
2. Instruction Decode.
3. Execute.
4. Memory Access and
5. Write Back.
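The overlap of the five stages can be sketched as a simple timing table (a minimal illustration of an ideal pipeline, not a cycle-accurate simulator):

```python
STAGES = ["IF", "ID", "EX", "MEM", "WB"]

def schedule(n_instructions):
    """Return {cycle: [(instruction, stage), ...]} for an ideal pipeline:
    instruction i enters stage s in cycle i + s + 1."""
    timeline = {}
    for i in range(n_instructions):
        for s, stage in enumerate(STAGES):
            timeline.setdefault(i + s + 1, []).append((f"I{i+1}", stage))
    return timeline

t = schedule(3)
total_cycles = max(t)  # n + stages - 1 = 3 + 5 - 1
print(total_cycles)    # 7
print(t[5])            # [('I1', 'WB'), ('I2', 'MEM'), ('I3', 'EX')]
```

In cycle 5 three instructions are in flight at once, which is the whole point of pipelining.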
—————————
3.For a pipeline with ‘n’ stages, what’s the ideal throughput? What prevents us from achieving this ideal throughput?
Answers:
With an n-stage pipeline, the ideal throughput is one instruction completed per clock cycle, an n-fold speedup over the unpipelined machine.
This ideal is not achieved in practice because the pipe stages can't be perfectly balanced (the time to perform the task differs from stage to stage), and furthermore the pipeline itself involves some overheads (such as latch delays and the time to fill and drain the pipe).
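The ideal figures can be sketched numerically, assuming the standard cycle count of k + (n - 1) for n instructions on a k-stage pipeline: throughput approaches one instruction per cycle only as n grows large.

```python
def cycles(n, k):
    """Ideal cycle count for n instructions on a k-stage pipeline."""
    return k + (n - 1)

def throughput(n, k):
    """Instructions completed per cycle."""
    return n / cycles(n, k)

k = 5
print(cycles(1000, k))                # 1004
print(round(throughput(1000, k), 3))  # 0.996, close to the ideal of 1
```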
——————————-
4.What are the different hazards? How do you avoid them?
There are situations, called hazards, that prevent the next instruction in the instruction stream from executing during its designated clock cycle. Hazards reduce the performance from the ideal speedup gained by pipelining. There are three classes of Hazards:
1. Structural Hazards: these arise from resource conflicts when the hardware cannot support all possible combinations of instructions simultaneously in overlapped execution.
2. Data Hazards: these arise when an instruction depends on the result of a previous instruction in a way that is exposed by the overlapping of instructions in the pipeline.
3. Control Hazards: these arise from the pipelining of branches and other instructions that change the PC.
How to Avoid Hazards:
1. Structural Hazards: these arise when some functional unit is not fully pipelined; the sequence of instructions using that unpipelined unit then cannot proceed at the rate of one per clock cycle. Another common cause is a resource that is not duplicated enough to allow all combinations of instructions in the pipeline to execute. Fully pipelining the stages and duplicating resources avoids structural hazards.
2. Data Hazards: a major effect of pipelining is to change the relative timing of instructions by overlapping their execution. This overlap introduces the data and control hazards. Data hazards occur when the pipeline changes the order of read/write accesses to operands so that the order differs from the order seen by sequentially executing instructions on an unpipelined processor. They can be minimized by a simple hardware technique called forwarding, or by adding stalls.
3. Control Hazards: these are also known as branch hazards. The simplest scheme to handle branch hazards is to freeze or flush the pipeline, holding or deleting any instructions after the branch until the branch destination is known. In this case the branch penalty is fixed and cannot be reduced by software. Other schemes are predicted-not-taken (predicted-untaken) and the delayed branch.
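The read-after-write case behind data hazards can be sketched as a small detection routine (a minimal illustration; the instruction encoding is made up):

```python
def raw_hazards(program):
    """Find read-after-write hazards between adjacent instructions:
    an instruction reading a register written by its predecessor
    would need forwarding or a stall."""
    hazards = []
    for i in range(1, len(program)):
        dest_prev = program[i - 1][1]  # destination register of previous
        sources = program[i][2]        # source registers of current
        if dest_prev in sources:
            hazards.append((i - 1, i))
    return hazards

# (op, dest, (src1, src2)): ADD writes r1, SUB reads r1 immediately after.
prog = [("ADD", "r1", ("r2", "r3")),
        ("SUB", "r4", ("r1", "r5")),
        ("AND", "r6", ("r2", "r7"))]
print(raw_hazards(prog))  # [(0, 1)]
```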
—————————-
8.What is a cache?
Pronounced cash, a special high-speed storage mechanism. It can be either a reserved section of main memory or an independent high-speed storage device. Two types of caching are commonly used in personal computers: memory caching and disk caching.
A memory cache, sometimes called a cache store or RAM cache, is a portion of memory made of high-speed static RAM (SRAM) instead of the slower and cheaper dynamic RAM (DRAM) used for main memory. Memory caching is effective because most programs access the same data or instructions over and over. By keeping as much of this information as possible in SRAM, the computer avoids accessing the slower DRAM.
Some memory caches are built into the architecture of microprocessors. The Intel 80486 microprocessor, for example, contains an 8K memory cache, and the Pentium has a 16K cache. Such internal caches are often called Level 1 (L1) caches. Most modern PCs also come with external cache memory, called Level 2 (L2) caches. These caches sit between the CPU and the DRAM. Like L1 caches, L2 caches are composed of SRAM but they are much larger.
Disk caching works under the same principle as memory caching, but instead of using high-speed SRAM, a disk cache uses conventional main memory. The most recently accessed data from the disk (as well as adjacent sectors) is stored in a memory buffer. When a program needs to access data from the disk, it first checks the disk cache to see if the data is there. Disk caching can dramatically improve the performance of applications, because accessing a byte of data in RAM can be thousands of times faster than accessing a byte on a hard disk.
When data is found in the cache, it is called a cache hit, and the effectiveness of a cache is judged by its hit rate. Many cache systems use a technique known as smart caching, in which the system can recognize certain types of frequently used data. The strategies for determining which information should be kept in the cache constitute some of the more interesting problems in computer science.
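The hit/miss behavior described above can be sketched with a small LRU cache model (an illustration only; real hardware caches work on fixed-size lines, and the addresses here are made up). Repeated accesses to the same addresses hit in the cache, which is why caching pays off for programs with locality:

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()
        self.hits = self.misses = 0

    def access(self, address):
        if address in self.store:
            self.hits += 1
            self.store.move_to_end(address)     # mark most recently used
        else:
            self.misses += 1
            self.store[address] = True
            if len(self.store) > self.capacity:
                self.store.popitem(last=False)  # evict least recently used

cache = LRUCache(capacity=2)
for addr in [1, 2, 1, 1, 3, 2]:
    cache.access(addr)
print(cache.hits, cache.misses)  # 2 4
```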
———————————-
11.What is Virtual Memory?
In computing, virtual memory is a memory management technique developed for multitasking kernels. This technique virtualizes a computer architecture’s various forms of computer data storage (such as random-access memory and disk storage), allowing a program to be designed as though there is only one kind of memory, “virtual” memory, which behaves like directly addressable read/write memory (RAM).
Most modern operating systems that support virtual memory also run each process in its own dedicated address space. Each program thus appears to have sole access to the virtual memory. However, some older operating systems (such as OS/VS1 and OS/VS2 SVS) and even modern ones (such as IBM i) are single address space operating systems that run all processes in a single address space composed of virtualized memory.
Virtual memory makes application programming easier by hiding fragmentation of physical memory; by delegating to the kernel the burden of managing the memory hierarchy (eliminating the need for the program to handle overlays explicitly); and, when each process is run in its own dedicated address space, by obviating the need to relocate program code or to access memory with relative addressing.
Memory virtualization is a generalization of the concept of virtual memory.
Virtual memory is an integral part of a computer architecture; implementations require hardware support, typically in the form of a memory management unit built into the CPU. While not strictly necessary, emulators and virtual machines can employ hardware support to increase the performance of their virtual memory implementations. Consequently, older operating systems, such as those for the mainframes of the 1960s, and those for personal computers of the early to mid 1980s (e.g. DOS), generally have no virtual memory functionality, though there were notable exceptions among 1960s mainframes.
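Address translation, the core mechanism behind virtual memory, can be sketched as a page-table lookup (a minimal illustration; the 4 KiB page size is typical, but the frame numbers are made up):

```python
PAGE_SIZE = 4096  # 4 KiB pages

# Page table: virtual page number -> physical frame number.
page_table = {0: 7, 1: 3, 2: 9}

def translate(virtual_addr):
    """Translate a virtual address to a physical one via the page table."""
    page, offset = divmod(virtual_addr, PAGE_SIZE)
    if page not in page_table:
        # On real hardware this is a page fault; the kernel would
        # bring the page in from disk and retry.
        raise LookupError("page fault")
    return page_table[page] * PAGE_SIZE + offset

print(translate(4100))  # page 1, offset 4 -> 3 * 4096 + 4 = 12292
```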
—————————-
12.What is Cache Coherency?
Cache coherency is a protocol for managing the caches of a multiprocessor system so that no data is lost or overwritten before the data is transferred from a cache to the target memory. When two or more computer processors work together on a single program, known as multiprocessing, each processor may have its own memory cache that is separate from the larger RAM that the individual processors will access.
When multiple processors with separate caches share a common memory, it is necessary to keep the caches in a state of coherence by ensuring that any shared operand that is changed in any cache is changed throughout the entire system. This is done in either of two ways: through a directory-based or a snooping system. In a directory-based system, the data being shared is placed in a common directory that maintains the coherence between caches. The directory acts as a filter through which the processor must ask permission to load an entry from the primary memory to its cache. When an entry is changed the directory either updates or invalidates the other caches with that entry. In a snooping system, all caches on the bus monitor (or snoop) the bus to determine if they have a copy of the block of data that is requested on the bus. Every cache has a copy of the sharing status of every block of physical memory it has.
Cache misses and memory traffic due to shared data blocks limit the performance of parallel computing in multiprocessor computers or systems. Cache coherence aims to solve the problems associated with sharing data.
————————
14.What is a Snooping cache?
Snooping is the process where the individual caches monitor address lines for accesses to memory locations that they have cached. When a write operation is observed to a location that a cache has a copy of, the cache controller invalidates its own copy of the snooped memory location.
Snarfing is where a cache controller watches both address and data in an attempt to update its own copy of a memory location when a second master modifies a location in main memory.
————————
15.What are the components in a Microprocessor?
An input/output (I/O) unit, a control unit, and one or more arithmetic logic units (ALUs)
—————————
16.What is ACBF(Hex) divided by 16?
1. Convert to decimal: (A×16^3)+(C×16^2)+(B×16^1)+(F×16^0) = 40960 + 3072 + 176 + 15 = 44223.
2. There is no need to compute the whole result, since we have to divide the number by 16 anyway, which simply lowers each exponent by one:
(A×16^2)+(C×16^1)+(B×16^0)+(F×16^-1)
3. The result is 2560 + 192 + 11 + 0.9375 = 2763.9375 in decimal.
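The arithmetic can be checked with Python's hex literals:

```python
value = 0xACBF
print(value)             # 44223 in decimal
print(value / 16)        # 2763.9375
print(hex(value // 16))  # 0xacb, the integer quotient
print(value % 16)        # 15, i.e. a remainder of F
```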
————————
17.Convert 65(Hex) to Binary
65 is converted to decimal by MULTIPLYING the individual digits with the correct exponents of 16.
6*(16^1) + 5*(16^0) = 101
We then convert 101 into binary which gives us
01100101
or
or, converting digit by digit (each hex digit corresponds to four binary bits):
6 in hex is 0110 in binary
5 in hex is 0101 in binary
Joined together: 01100101
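The same conversion, checked in Python:

```python
value = int("65", 16)        # parse the hex string
print(value)                 # 101 in decimal
print(format(value, "08b"))  # 01100101, zero-padded to 8 bits
```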
———————————-
19.The CPU is busy but you want to stop and do some other task. How do you do it?
In Windows, open Task Manager, select the process, and click End Task. This terminates the process so you can work on the other task.
———————-
Questions related to Computer Architecture-II
1. Explain what is DMA?
Direct memory access (DMA) is a system that can control the memory system without using the CPU. On a specified stimulus, the module will move data from one memory location or region to another memory location or region. While it is limited in its flexibility, there are many situations where automated memory access is much faster than using the CPU to manage the transfers. Systems like the ADC, DAC and PWM capture all require frequent and regular movements of memory out of their respective systems. The DMA can be configured to handle moving the collected data out of the peripheral module and into more useful memory locations (like arrays).
Only memory can be accessed this way, but most peripheral systems, data registers, and control registers are accessed as if they were memory. The DMA is intended to be used in low power mode because it uses the same memory bus as the CPU and only one or the other can use the memory at the same time.
The DMA system is organized into three largely independent parts. Though the three compete for the same memory bus, they can be configured for independent triggers and memory regions.
————————-
16. Different addressing modes.
Types of Addressing Modes
Each instruction of a computer specifies an operation on certain data. There are various ways of specifying the address of the data to be operated on. These different ways of specifying data are called the addressing modes. The most common addressing modes are:
Immediate addressing mode
Direct addressing mode
Indirect addressing mode
Register addressing mode
Register indirect addressing mode
Displacement addressing mode
Stack addressing mode
To specify the addressing mode of an instruction, several methods are used. The most often used are:
a) Different operands will use different addressing modes.
b) One or more bits in the instruction format can be used as mode field. The value of the mode field determines which addressing mode is to be used.
The effective address will be either a main memory address or a register.
Immediate Addressing:
This is the simplest form of addressing. Here, the operand is given in the instruction itself. This mode is used to define a constant or set initial values of variables. The advantage of this mode is that no memory reference other than the instruction fetch is required to obtain the operand. The disadvantage is that the size of the number is limited to the size of the address field, which in most instruction sets is small compared to the word length.
Format: the instruction itself contains the operand.
Direct Addressing:
In direct addressing mode, effective address of the operand is given in the address field of the instruction. It requires one memory reference to read the operand from the given location and provides only a limited address space. Length of the address field is usually less than the word length.
Ex: MOVE P, R0 and ADD Q, R0, where P and Q are the addresses of the operands.
Indirect Addressing:
In indirect addressing mode, the address field of the instruction refers to the address of a word in memory, which in turn contains the full-length address of the operand. The advantage of this mode is that for a word length of N, an address space of 2^N can be addressed. The disadvantage is that instruction execution requires two memory references to fetch the operand. Multilevel or cascaded indirect addressing can also be used.
Register Addressing:
Register addressing mode is similar to direct addressing. The only difference is that the address field of the instruction refers to a register rather than a memory location. Only 3 or 4 bits are needed as the address field to reference 8 to 16 general-purpose registers. The advantage of register addressing is that only a small address field is needed in the instruction.
Register Indirect Addressing:
This mode is similar to indirect addressing. The address field of the instruction refers to a register. The register contains the effective address of the operand. This mode uses one memory reference to obtain the operand. The address space is limited to the width of the registers available to store the effective address.
Displacement Addressing:
In displacement addressing mode there are 3 types of addressing mode. They are :
1) Relative addressing
2) Base register addressing
3) Indexing addressing.
This is a combination of direct addressing and register indirect addressing. The value contained in one address field, A, is used directly, and the other address field refers to a register whose contents are added to A to produce the effective address.
Stack Addressing:
A stack is a linear array of locations, referred to as a last-in, first-out (LIFO) list. The stack is a reserved block of locations, appended to or deleted from only at the top of the stack. The stack pointer is a register which stores the address of the top-of-stack location. This mode of addressing is also known as implicit addressing.
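The modes above can be sketched as a small operand-resolution routine (a minimal illustration; the memory contents and register names are made up):

```python
# Toy machine state: address -> contents, register name -> contents.
memory = {100: 42, 200: 100}
registers = {"R1": 200, "R2": 7}

def operand(mode, field):
    """Resolve an operand the way each addressing mode would."""
    if mode == "immediate":          # operand is in the instruction itself
        return field
    if mode == "direct":             # field is the operand's address
        return memory[field]
    if mode == "indirect":           # field holds the address of the address
        return memory[memory[field]]
    if mode == "register":           # field names a register holding the operand
        return registers[field]
    if mode == "register_indirect":  # register holds the operand's address
        return memory[registers[field]]
    raise ValueError(mode)

print(operand("immediate", 5))             # 5
print(operand("direct", 100))              # 42
print(operand("indirect", 200))            # memory[memory[200]] -> 42
print(operand("register", "R2"))           # 7
print(operand("register_indirect", "R1"))  # memory[200] -> 100
```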
———————————
24. What are the types of memory management?
in pdf