Developments in information technology have begun to pose new challenges for policing. Most professions have had to adapt to the digital age, and the police profession must be particularly adaptive, because criminal exploitation of digital technologies necessitates new types of criminal investigation.
More and more, information technology is becoming the instrument of criminal activity. Investigating these sophisticated crimes, and assembling the necessary evidence for presentation in a court of law, will become a significant police responsibility.
The application of computer technology to the investigation of computer-based crime has given rise to the field of forensic computing. This paper provides an overview of the field of forensic computing.
Given the increasing opportunities for computer-based crime in contemporary Australia, a number of related emerging issues need serious consideration by Australian law enforcement. Many agencies throughout Australia must recognize the contribution that forensic computing can make in the investigation of crime, and in turn must ensure that such a contribution is supported and positively promoted.
Failure to do so will see those agencies falling behind technologically competent criminals who readily recognize the advantages of using new technologies in the commission of crime…
The art of true memory is the art of attention.
Your brain never loses anything. Once it is put in your memory, it remains there for the rest of your life. What we call “forgetting” is either the inability to recall stored information or the failure to store information.
1. Learn from the general to the specific.
Scan through the whole book and get an idea of the course content. Then start learning chapter by chapter or section by section.
2. Make it meaningful.
How does this mechanics course relate to your goal of becoming a mechanical engineer? When information helps you get something you want, it is easier to remember.
3. Create associations.
Think of something related to the topic and create associations with it. This helps the memory process.
USE YOUR BODY
4. Learn it once, actively.
Involve your body the first time you learn something. Sit up, sit on the edge of your chair, or walk while studying. Some believe that the brain works better in a standing position. Relax as you do so: when we are relaxed, we absorb new information more quickly and recall it with greater accuracy.
5. Create pictures
Use diagrams and cartoons to connect facts and illustrate relationships.
6. Recite and repeat
When you recite something aloud, you anchor the information by hearing it as well as seeing it; the combined result is synergistic. When you repeat something, you create a trail through the pathways of your brain, making the information easier to find.
7. Write it down.
Writing a note to yourself helps you to remember an idea, even if you never look at the note again.
USE YOUR BRAIN
8. Reduce interference
Don’t sit near a TV or a refrigerator while studying. These may distract you.
9. Escape the short term memory gap.
A short review within minutes or hours of a study session can move material from short-term memory to long-term memory.
10. Distribute learning
Marathon study sessions are not very effective. You can get far more done in three one-hour sessions than in one three-hour session. Switch to a new subject after an hour of study.
11. Beware of attitudes
People who believe that math is difficult will have difficulty remembering math. We remember what we find interesting.
12. Use daylight for study
Study the most difficult subjects during daylight hours. The early morning hours can be more productive for most people.
13. Learn more than what is needed.
Do the assigned problems and then do more problems.
14. Choose what not to store in memory
Decide what is essential to remember from a reading assignment or lecture.
Extract the core concepts, then apply the memory techniques to those concepts.
15. Combine memory techniques
Choose two or three techniques to use on a particular assignment. Experiment for yourself with these techniques.
16. Remember something else
When you are stuck and can't remember something you know you know, remember something else that is related to it. If you can't remember your aunt's name, try to remember your great-aunt's name.
17. Notice when you do remember
Some people are best at remembering information they have read. Others remember best what they have seen, heard or done. Notice your own style and follow it.
18. Use it before you lose it
To remember something, access it a lot. Read it, write it, speak it, listen to it, apply it.
19. And remember, you never forget
Develop a strong belief that you will never forget. Keep saying "my memory serves me well", or even "I never forget."
3D PC Glasses
Only a few years ago, seeing in 3-D meant peering through a pair of red-and-blue glasses, or trying not to go cross-eyed in front of a page of fuzzy dots. It was great at the time, but 3-D technology has moved on. Scientists know more about how our vision works than ever before, and our computers are more powerful than ever before — most of us have sophisticated components in our computer that are dedicated to producing realistic graphics. Put those two things together, and you'll see how 3-D graphics have really begun to take off.
Most computer users are familiar with 3-D games. Back in the 90s, computer enthusiasts were stunned by the game Wolfenstein 3D, which took place in a maze-like castle. It may have been constructed from blocky tiles, but the castle existed in three dimensions — you could move forward and backward, or hold down the appropriate key and see your viewpoint spin through 360 degrees. Back then, it was revolutionary and quite amazing. Nowadays, gamers enjoy ever more complicated graphics — smooth, three-dimensional environments complete with realistic lighting and complex simulations of real-life physics grace our screens.
But that's the problem — the screen. The game itself may be in three dimensions, and the player may be able to look wherever he wants with complete freedom, but at the end of the day the picture is displayed on a computer monitor…and that's a flat surface.
That's where PC 3-D glasses come in. They're designed to convince your brain that your monitor is showing a real, three-dimensional object. In order to understand quite how this works, we need to know what sort of work our brain does with the information our eyes give it. Once we know about that, we'll be able to understand just how 3-D glasses do their job.
ATM
These computers include the entire spectrum of PCs, through professional workstations, up to supercomputers. As the performance of computers has increased, so too has the demand for communication between all systems, whether for exchanging data or between central servers and the associated host computer systems. The replacement of copper with fiber and the advancements in digital communication and encoding are at the heart of several developments that will change the communication infrastructure. The former development has provided us with a huge amount of transmission bandwidth, while the latter has made the transmission of all information, including voice and video, through a packet-switched network possible.
With work increasingly shared over large distances, including international communication, systems must be interconnected via wide area networks, with growing demands for higher bit rates.
For the first time, a single communications technology meets LAN and WAN requirements and handles a wide variety of current and emerging applications. ATM is the first technology to provide a common format for bursts of high-speed data and for the ebb and flow of the typical voice phone call. Seamless ATM networks provide desktop-to-desktop multimedia networking over a single-technology, high-bandwidth, low-latency network, removing the boundary between LAN and WAN.
ATM is simply a Data Link Layer protocol. It is asynchronous in the sense that the recurrence of cells containing information from an individual user is not necessarily periodic. It is the technology of choice for the evolving B-ISDN (Broadband Integrated Services Digital Network) and for next-generation LANs and WANs. ATM supports transmission speeds of 155 Mbit/s, with higher speeds planned for the future. Photonic approaches have made ATM switches feasible, and an evolution towards an all-packetized, unified broadband telecommunications and data communication world based on ATM is taking place.
The Ajax technique uses a combination of:
XHTML (or HTML) and CSS, for marking up and styling information.
The XMLHttpRequest object to exchange data asynchronously with the web server. In some Ajax frameworks and in certain situations, an IFrame object is used instead of the XMLHttpRequest object to exchange data with the web server.
XML is sometimes used as the format for transferring data between the server and client, although any format will work, including preformatted HTML, plain text, JSON and even EBML.
Like DHTML, LAMP and SPA, Ajax is not a technology in itself, but a term that refers to the use of a group of technologies together.
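The asynchronous-exchange pattern at the heart of Ajax can be sketched outside the browser. The Python sketch below is an analogy only: the names async_request and fake_server are invented, a background thread stands in for the browser's XMLHttpRequest object, and JSON is used as the transfer format, one of the options the text mentions.

```python
import json
import threading
import queue

# A toy model of the Ajax pattern: the "page" keeps running while a
# background request fetches data, then a callback applies the result.
# fetch_fn stands in for the server round-trip.

def async_request(fetch_fn, on_success):
    """Run fetch_fn in the background; deliver its parsed result to on_success."""
    def worker():
        raw = fetch_fn()             # e.g. an HTTP GET returning JSON text
        on_success(json.loads(raw))  # parse and hand off, as Ajax apps do
    t = threading.Thread(target=worker)
    t.start()
    return t

# Simulated server response (JSON, one of the formats the text mentions).
def fake_server():
    return json.dumps({"user": "alice", "unread": 3})

results = queue.Queue()
t = async_request(fake_server, results.put)
t.join()
data = results.get()
print(data)   # → {'user': 'alice', 'unread': 3}
```

In a real browser the callback would update part of the page in place, which is exactly what makes the technique feel responsive.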
Elliptic curve cryptography (ECC)
ECC is a public-key encryption technique based on elliptic curve theory. ECC can be used to create faster, smaller and more efficient cryptographic keys. It generates keys through the properties of the elliptic curve equation, rather than by the traditional method of generating them as the product of very large prime numbers. This technology can be used in conjunction with most public-key encryption methods, such as RSA and Diffie-Hellman.
ECC can yield a level of security with a 164-bit key that other systems require a 1,024-bit key to achieve. Since ECC provides equivalent security at lower computing power and battery resource usage, it is widely used for mobile applications. ECC was developed by Certicom, a mobile e-business security provider, and was recently licensed by Hifn, a manufacturer of integrated circuitry and network security products. Many manufacturers, including 3COM, Cylink, Motorola, Pitney Bowes, Siemens, TRW and VeriFone, have incorporated support for ECC in their products.
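Generating keys "through the properties of the elliptic curve equation" can be sketched with a textbook-sized curve. The curve, base point and private key below are toy values, far too small for real security, but the mechanics (point addition and scalar multiplication) are the same ones real ECC uses.

```python
# Toy elliptic-curve key generation over E: y^2 = x^3 + 2x + 2 (mod 17),
# base point G = (5, 1) of order 19 -- a classic textbook curve.

P, A = 17, 2
G = (5, 1)
O = None  # the point at infinity (group identity)

def inv(x):                 # modular inverse via Fermat's little theorem
    return pow(x, P - 2, P)

def add(p1, p2):
    if p1 is O: return p2
    if p2 is O: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return O                                  # p2 is the negation of p1
    if p1 == p2:
        m = (3 * x1 * x1 + A) * inv(2 * y1) % P   # tangent slope
    else:
        m = (y2 - y1) * inv(x2 - x1) % P          # chord slope
    x3 = (m * m - x1 - x2) % P
    return (x3, (m * (x1 - x3) - y1) % P)

def mul(k, pt):             # double-and-add scalar multiplication
    r = O
    while k:
        if k & 1:
            r = add(r, pt)
        pt = add(pt, pt)
        k >>= 1
    return r

d = 7                       # private key: a random scalar in practice
Q = mul(d, G)               # public key: d copies of G added together
print(Q)                    # → (0, 6)
```

Security rests on the difficulty of recovering d from Q and G (the elliptic curve discrete logarithm problem), which is why ECC keys can be so much shorter than RSA keys.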
Generic visual perception processor
Generic visual perception processor is a single chip modeled on the perception capabilities of the human brain, which can detect objects in a motion video signal and then locate and track them in real time. Imitating the human eye's neural networks and the brain, the chip can handle about 20 billion instructions per second. This electronic eye on a chip can handle tasks ranging from sensing variable parameters, in the form of video signals, to processing them for control purposes.
Hyper Transport Technology
This section describes AMD's Hyper Transport™ technology, a new I/O architecture for personal computers, workstations, servers, high-performance networking and communications systems, and embedded applications. This scalable architecture can provide significantly increased bandwidth over existing bus architectures and can simplify in-the-box connectivity by replacing legacy buses and bridges. The programming model used in Hyper Transport technology is compatible with existing models and requires little or no change to existing operating system and driver software.
It provides a universal connection designed to reduce the number of buses within the system. It is designed to enable the chips inside of PCs and networking and communications devices to communicate with each other up to 48 times faster than with existing technologies. Hyper Transport technology is truly the universal solution for in-the-box connectivity.
>> It is a new I/O architecture for personal computers, workstations, servers, embedded applications, etc.
>> It is a scalable architecture that can provide significantly increased bandwidth over existing bus architectures.
>> It simplifies in-the-box connectivity by replacing legacy buses and bridges.
>> The programming model used in Hyper Transport technology is compatible with existing models and requires little or no changes to existing operating system and driver software.
Hyper Transport technology provides high speeds while maintaining full software and operating system compatibility with the Peripheral Component Interconnect (PCI) interface that is used in most systems today. In older multi-drop bus architectures like PCI, the addition of hardware devices affects the overall electrical characteristics and bandwidth of the entire bus. Even with PCI-X 1.0, the maximum supported clock speed of 133MHz must be reduced when more than one PCI-X device is attached. Hyper Transport technology uses a point-to-point link that is connected between two devices, enabling the overall speed of the link to transfer data much faster.
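The contrast between a shared multi-drop bus and point-to-point links can be made concrete with a little arithmetic. The figures below are illustrative assumptions, not vendor specifications: the 100 MHz derated clock is an example of the PCI-X slowdown the text describes, and the 800 MHz x 32-bit link is a hypothetical point-to-point configuration.

```python
# Illustrative arithmetic: on a shared multi-drop bus, all attached
# devices divide one pool of bandwidth (and heavy loading can force the
# clock down); a point-to-point link gives each device pair its own
# full-speed connection.

def shared_bus_per_device(clock_mhz, width_bytes, devices):
    total = clock_mhz * 1e6 * width_bytes      # one shared pool, bytes/s
    return total / devices

def point_to_point_per_device(clock_mhz, width_bytes):
    return clock_mhz * 1e6 * width_bytes       # each link is dedicated

# PCI-X-style figures: 64-bit (8-byte) bus at 133 MHz with one device,
# assumed derated to 100 MHz when four devices are attached.
one_dev  = shared_bus_per_device(133, 8, 1)
four_dev = shared_bus_per_device(100, 8, 4)
link     = point_to_point_per_device(800, 4)   # hypothetical 800 MHz x 32-bit link

print(round(one_dev / 1e6), round(four_dev / 1e6), round(link / 1e6))
# → 1064 200 3200   (MB/s: shared bus collapses per device; link does not)
```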
Kerberos
In a non-networked personal computing environment, resources and information can be protected by physically securing the personal computer. But in a network of users requiring services from many computers, the identity of each user has to be accurately verified. Kerberos is used for this authentication: it is a third-party authentication technology used to identify a user requesting a service.
METASPLOIT
The Metasploit Project is an open source computer security project which provides information about security vulnerabilities and aids in penetration testing and IDS signature development. Its most well-known sub-project is the Metasploit Framework, a tool for developing and executing exploit code against a remote target machine.
Real Time Operating System
A real-time system is defined as follows: a real-time system is one in which the correctness of the computations depends not only upon the logical correctness of the computation but also upon the time at which the result is produced. If the timing constraints of the system are not met, the system is said to have failed.
There are two types of real-time operating system.
Hard real-time operating system: strict time constraints; secondary storage limited or absent; conflicts with time-sharing systems; not supported by general-purpose operating systems.
Soft real-time operating system: reduced time constraints; limited utility in industrial control or robotics; useful in applications (multimedia, virtual reality) requiring advanced operating-system features.
In the robot example, it would be hard real time if the robot arriving late caused completely incorrect operation, and soft real time if the robot arriving late meant only a loss of throughput. Much of what is done in real-time programming is actually soft real-time work. Good system design often implies a level of safe/correct behaviour even if the computer system never completes the computation, so if the computer is only a little late, the system effects may be somewhat mitigated.
What makes an OS an RTOS?
1. A RTOS (Real-Time Operating System) has to be multi-threaded and preemptible.
2. The notion of thread priority has to exist, as there is for the moment no deadline-driven OS.
3. The OS has to support predictable thread synchronisation mechanisms.
4. A system of priority inheritance has to exist.
5. For every system call, the maximum time it takes should be predictable and independent of the number of objects in the system.
6. The maximum time the OS and drivers mask the interrupts should be known.
The following points should also be known by the developer:
1. System Interrupt Levels.
2. Device driver IRQ Levels, maximum time they take, etc.
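The hard/soft distinction above can be made concrete with a toy check over task completion times. The task name, timings and deadline below are invented for illustration; a real RTOS enforces this in the scheduler, not after the fact.

```python
# A toy trace check: hard real-time treats any missed deadline as a
# system failure, soft real-time as degraded throughput.

def check(completions, deadline, hard=True):
    """Classify a list of completion times against one deadline."""
    misses = [t for t in completions if t > deadline]
    if hard and misses:
        return "system failure"
    if misses:
        return f"degraded: {len(misses)} late result(s)"
    return "all deadlines met"

robot_arm = [4.8, 4.9, 5.2, 4.7]           # completion times, deadline 5.0 ms
print(check(robot_arm, 5.0, hard=True))    # hard RT: one late result -> failure
print(check(robot_arm, 5.0, hard=False))   # soft RT: just lost throughput
```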
The MBMS
The MBMS is a unidirectional point-to-multipoint bearer service in which data is transmitted from a single source entity to multiple recipients. These services will typically take the form of streaming video and audio and should not be confused with the CBS (Cell Broadcast Service) that is currently supported. This paper describes the architecture of the MBMS along with its functional notes and its integration into 3G and GERAN (GSM & EDGE Radio Access Network), with the Core Network, UTRAN (UMTS Terrestrial Radio Access Network) and radio aspects being explained.
Voice Over Internet Protocol
VoIP, or Voice over Internet Protocol, refers to sending voice and fax phone calls over data networks, particularly the Internet. This technology offers cost savings by making more efficient use of the existing network.
Traditionally, voice and data were carried over separate networks optimized to suit the differing characteristics of voice and data traffic. With advances in technology, it is now possible to carry voice and data over the same networks whilst still catering for the different characteristics required by voice and data.
Voice over Internet Protocol (VoIP) is an emerging technology that allows telephone calls or faxes to be transported over an IP data network. The IP network could be:
A local area network in an office
A wide area network linking the sites of a large international organization
A corporate intranet
Any combination of the above
There can be no doubt that IP is here to stay. The explosive growth of the Internet, making IP the predominant networking protocol globally, presents a huge opportunity to dispense with separate voice and data networks and use IP technology for voice traffic as well as data. As voice and data network technologies merge, massive infrastructure cost savings can be made as the need to provide separate networks for voice and data is eliminated.
Most traditional phone networks use the Public Switched Telephone Network (PSTN). This system employs circuit-switched technology that requires a dedicated voice channel to be assigned to each conversation. Messages are sent in analog format over this network.
Today, phone networks are on a migration path to VoIP. A VoIP system employs a packet-switched network, where the voice signal is digitized, compressed and packetized. This compressed digital message no longer requires a voice channel. Instead, a message can be sent across the same data lines that are used for the intranet or Internet, and a dedicated channel is no longer needed. The message can now share bandwidth with other messages in the network.
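The digitize-compress-packetize step can be sketched as follows. This is an assumption-level model: real VoIP systems send codec frames (often 20 ms each) in RTP over UDP with more header fields than shown, and the addresses below are invented.

```python
# Sketch: chop a digitized voice stream into addressed, sequenced packets
# that can share data lines with ordinary traffic.

def packetize(pcm: bytes, frame_size: int, src_ip: str, dst_ip: str):
    """Split an audio byte stream into packet dicts of frame_size bytes."""
    packets = []
    for seq, off in enumerate(range(0, len(pcm), frame_size)):
        packets.append({
            "src": src_ip, "dst": dst_ip,   # IP addressing, as for any data
            "seq": seq,                     # lets the receiver reorder / detect loss
            "payload": pcm[off:off + frame_size],
        })
    return packets

voice = bytes(range(100))                   # stand-in for compressed audio
pkts = packetize(voice, 40, "10.0.0.5", "10.0.0.9")
print(len(pkts), len(pkts[-1]["payload"]))  # → 3 20  (3 packets, last holds 20 bytes)
```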
Normal data traffic is carried between PCs, servers, printers, and other networked devices through a company's worldwide TCP/IP network. Each device on the network has an IP address, which is attached to every packet for routing. Voice-over-IP packets are no different.
Users may use appliances such as Symbol's NetVision phone to talk to other IP phones or desktop PC-based phones located at company sites worldwide, provided that a voice-enabled network is installed at the site. Installation simply involves assigning an IP address to each wireless handset.
VOIP lets you make toll-free long distance voice and fax calls over existing IP data networks instead of the public switched telephone network (PSTN).
Today, businesses that implement their own VoIP solution can dramatically cut long-distance costs between two or more locations.
Wireless Markup Language
When it's time to find out how to make content available over WAP, we need to get to grips with its markup language, WML. WML was designed from the start as a markup language to describe the display of content on small-screen devices.
It is a markup language enabling the formatting of text in the WAP environment, using a variety of markup tags to determine the display appearance of content. WML is defined using the rules of XML (Extensible Markup Language) and is therefore an XML application. WML provides a means of allowing the user to navigate around the WAP application, and supports the use of anchored links as found commonly in web pages. It also provides support for images and layout within the constraints of the device.
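A minimal WML deck shows these ideas in practice: the XML form, the card-based structure, and an anchored link from one card to another. The card names and text below are invented for illustration.

```xml
<?xml version="1.0"?>
<!DOCTYPE wml PUBLIC "-//WAPFORUM//DTD WML 1.1//EN"
  "http://www.wapforum.org/DTD/wml_1.1.xml">
<wml>
  <card id="home" title="Welcome">
    <p>
      Hello over WAP!<br/>
      <a href="#news">Latest news</a>  <!-- anchored link to another card -->
    </p>
  </card>
  <card id="news" title="News">
    <p>No new items today.</p>
  </card>
</wml>
```

A WML file is a "deck" of such cards; the phone downloads the whole deck and navigates between cards locally, which suits slow WAP links.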
B-ISDN Reference Model
ATM makes B-ISDN a reality. The Integrated Services Digital Network (ISDN) evolved during the 80s. It carried a basic channel that could operate at 64 kbps (the B-channel), and combinations of this and others (D-channels) formed the basis of communication on the network. In the new B-ISDN world, this is supposed to supply data, voice and other communication services over a common network with a wide range of data speeds. To understand a lot of the terminology in ATM-land, it is necessary to understand the B-ISDN Reference Model. Just as the ISO seven-layer model defines the layers for network software, this model defines layers for the ATM network.
The header is broken up into the following fields.
Generic Flow Control (GFC)
Virtual Channel Identifier (VCI)
Virtual Path Identifier (VPI)
Payload type (PT)
Cell Loss Priority (CLP)
Header Error Control (HEC)
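The fields above make up the 5-byte ATM cell header. Assuming the UNI layout (GFC 4 bits, VPI 8, VCI 16, PT 3, CLP 1, then the HEC byte), a short sketch can unpack them; the sample bytes are arbitrary.

```python
# Parse a 5-byte ATM cell header (UNI format) into its named fields.

def parse_atm_header(h: bytes):
    assert len(h) == 5
    word = int.from_bytes(h[:4], "big")    # first 4 bytes hold GFC..CLP
    return {
        "GFC": (word >> 28) & 0xF,         # Generic Flow Control, 4 bits
        "VPI": (word >> 20) & 0xFF,        # Virtual Path Identifier, 8 bits
        "VCI": (word >> 4)  & 0xFFFF,      # Virtual Channel Identifier, 16 bits
        "PT":  (word >> 1)  & 0x7,         # Payload Type, 3 bits
        "CLP":  word        & 0x1,         # Cell Loss Priority, 1 bit
        "HEC":  h[4],                      # Header Error Control, covers h[:4]
    }

hdr = parse_atm_header(bytes([0x12, 0x34, 0x56, 0x79, 0xAB]))
print(hdr["VPI"], hdr["VCI"], hdr["CLP"])   # → 35 17767 1
```

Every cell is exactly 53 bytes: this 5-byte header plus a 48-byte payload, which is what lets ATM switches work at fixed, predictable speed.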
Network-to-Network Interface
It is necessary for the switches to know how to send calls along. Several techniques could be adopted, but the most useful one for IP users is called the Private Network-to-Network Interface (PNNI). The PNNI is an interface between switches used to distribute information about the state and structure of the network, to establish circuits so that a reasonable bandwidth and QoS contract can be established, and to provide some network management functions.
Convergence sublayer:
The functions provided at this layer differ depending on the service provided. It provides bit error correction and may use explicit time stamps to transfer timing information.
Segmentation and reassembly sublayer:
At this layer the convergence sublayer protocol data unit is segmented and a header added. The header contains three fields: Sequence Number, used to detect cell insertion and cell loss; Sequence Number Protection, used to correct and detect errors that occur in the sequence number; and Convergence Sublayer Indication, used to indicate the presence of the convergence sublayer function.
Future use of biometric technology for security and authentication
Biometric technology is based on measurements of the human body, relying on characteristics that differ from one person to every other. Using this technology is far better than using any other.
Genetic programming (GP) is an automated methodology inspired by biological evolution to find computer programs that best perform a user-defined task. It is therefore a particular machine learning technique that uses an evolutionary algorithm to optimize a population of computer programs according to a fitness landscape determined by a program’s ability to perform a given computational task. The first experiments with GP were reported by Stephen F. Smith (1980) and Nichael L. Cramer (1985), as described in the famous book Genetic Programming: On the Programming of Computers by Means of Natural Selection by John Koza (1992).
Computer programs in GP can be written in a variety of programming languages. In the early (and traditional) implementations of GP, program instructions and data values were organized in tree-structures, thus favoring the use of languages that naturally embody such a structure (an important example pioneered by Koza is Lisp).
Other forms of GP have been suggested and successfully implemented, such as the simpler linear representation which suits the more traditional imperative languages [see, for example, Banzhaf et al. (1998)]. The commercial GP software Discipulus, for example, uses linear genetic programming combined with machine code language to achieve better performance. Differently, the MicroGP uses an internal representation similar to linear genetic programming to generate programs that fully exploit the syntax of a given assembly language.
GP is very computationally intensive and so in the 1990s it was mainly used to solve relatively simple problems. However, more recently, thanks to various improvements in GP technology and to the well known exponential growth in CPU power, GP has started delivering a number of outstanding results. At the time of writing, nearly 40 human-competitive results have been gathered, in areas such as quantum computing, electronic design, game playing, sorting, searching and many more. These results include the replication or infringement of several post-year-2000 inventions, and the production of two patentable new inventions.
Developing a theory for GP has been very difficult and so in the 1990s genetic programming was considered a sort of pariah amongst the various techniques of search. However, after a series of breakthroughs in the early 2000s, the theory of GP has had a formidable and rapid development. So much so that it has been possible to build exact probabilistic models of GP (schema theories and Markov chain models) and to show that GP is more general than, and in fact includes, genetic algorithms.
Genetic Programming techniques have now been applied to evolvable hardware as well as computer programs.
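A miniature GP run in the traditional tree-based style can be sketched in a few dozen lines. This is a deliberately simplified illustration of the idea, not Koza's full method: it uses mutation only (no crossover), a tiny function set, invented fitness cases, and a toy target expression (x*x + x).

```python
import random

# Evolve arithmetic expression trees toward the target function x*x + x.
random.seed(0)
TARGET = lambda x: x * x + x
CASES = [(x, TARGET(x)) for x in range(-5, 6)]          # fitness cases
OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
       "*": lambda a, b: a * b}

def rand_tree(depth=3):
    """Random expression: a variable, a constant, or an operator node."""
    if depth == 0 or random.random() < 0.3:
        return "x" if random.random() < 0.5 else random.randint(-2, 2)
    op = random.choice(list(OPS))
    return (op, rand_tree(depth - 1), rand_tree(depth - 1))

def evaluate(t, x):
    if t == "x": return x
    if isinstance(t, int): return t
    op, l, r = t
    return OPS[op](evaluate(l, x), evaluate(r, x))

def error(t):                                           # fitness: lower is better
    return sum(abs(evaluate(t, x) - y) for x, y in CASES)

def mutate(t):
    """Replace a random subtree with a fresh random tree."""
    if random.random() < 0.2 or not isinstance(t, tuple):
        return rand_tree(2)
    op, l, r = t
    return (op, mutate(l), r) if random.random() < 0.5 else (op, l, mutate(r))

pop = [rand_tree() for _ in range(60)]
start = min(error(t) for t in pop)
for gen in range(30):
    pop.sort(key=error)                                 # select the fittest
    pop = pop[:20] + [mutate(random.choice(pop[:20])) for _ in range(40)]
best = min(pop, key=error)
print(error(best) <= start)   # elitism guarantees no regression from gen 0
```

Real GP systems add crossover between trees, much larger populations, and depth limits, but the select-vary-repeat loop is the same.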
Meta-Genetic Programming is the technique of evolving a genetic programming system using genetic programming itself. Critics have argued that it is theoretically impossible, but more research is needed.
Inferno (new operating system)
Inferno answers the current and growing need in the marketplace for distributed computing solutions. Based on more than 20 years of Bell Labs research into operating systems and programming languages, Inferno is poised to propel network computing into the 21st century. Bell Labs will continue to support the evolution of Inferno under a joint development agreement with Vita Nuova. Inferno is an operating system for creating and supporting distributed services. It was originally developed by the Computing Science Research Center of Bell Labs, the R&D arm of Lucent Technologies, and further developed by other groups in Lucent. Inferno was designed specifically as a commercial product, both for licensing in the marketplace and for use within new Lucent offerings. It encapsulates many years of Bell Labs research in operating systems, languages, on-the-fly compilers, graphics, security, networking and portability.
Lightweight Directory Access Protocol
LDAP is actually a simple protocol that is used to access directory services. It is an open, vendor-neutral way to obtain directory information such as e-mail addresses and public keys for the secure transmission of data. The information contained within an LDAP directory could be ASCII text files, JPEG photographs or sound files. One way to reduce the time taken to search for information is to replicate the directory information over different platforms, so that the process of locating a specific piece of data is streamlined and more resilient to failures of connections and computers. This is what is done with information in an LDAP structure.
LDAP, the Lightweight Directory Access Protocol, is an Internet protocol that runs over TCP/IP and that e-mail programs use to look up contact information from a server. A directory is a specialized database, optimized for browsing, searching, locating and reading information. Thus LDAP makes it possible to obtain directory information such as e-mail addresses and public keys. LDAP can handle other information, but at present it is typically used to associate names with phone numbers and e-mail addresses.
LDAP is a directory structure and is completely based on entries for each piece of information. An entry is a collection of attributes that has a globally-unique Distinguished Name (DN).
The information in LDAP is arranged in a hierarchical tree-like structure. LDAP services are implemented using the client-server architecture. There are options for referencing and accessing information within the LDAP structure. An entry is referenced by its unique distinguished name. Unlike other directory structures, which allow the user access to all the information available, LDAP allows information to be accessed only after authenticating the user. It also supports privacy and integrity security services. There are two daemons for LDAP: slapd and slurpd.
THE LDAP DOMAIN
THE COMPONENTS OF AN LDAP DOMAIN
A small domain may have a single LDAP server and a few clients. The server commonly runs slapd, which serves LDAP requests and updates data. The client software comprises system libraries that translate normal library calls into LDAP data requests and provide some form of update functionality. Larger domains may have several LDAP slaves (read-only replicas of a master read/write LDAP server). For large installations, the domain may be divided into subdomains, with referrals to 'glue' the subdomains together.
THE STRUCTURE OF AN LDAP DOMAIN
A simple LDAP domain is structured on the surface in a manner similar to an NIS domain; there are masters, slaves, and clients. The clients may query masters or slaves for information, but all updates must go to the masters. The 'domain name' under LDAP is slightly different from that under NIS. LDAP domains may use an organization name and country.
The clients may or may not authenticate themselves to the server when performing operations, depending on the configuration of the client and the type of information requested. Commonly, access to non-sensitive information (such as port-to-service mappings) is unauthenticated, while password information requests or any updates are authenticated. Larger organizations may subdivide their LDAP domain into subdomains. LDAP allows for this type of scalability, and uses 'referrals' to allow the passing off of clients from one server to the next (the same method is used by slave servers to pass modification requests to the master).
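The entry-and-DN model described above can be sketched as a toy in-memory directory. The organization, people and addresses below are invented for illustration; a real deployment would run slapd and query it over the LDAP protocol rather than a Python dict.

```python
# A toy LDAP-style directory: each entry is a set of attributes stored
# under a globally unique Distinguished Name (DN). Attribute values are
# lists, as in LDAP, since attributes may be multi-valued.

directory = {
    "uid=asha,ou=people,o=ExampleCo,c=AU": {
        "cn": ["Asha Rao"],
        "mail": ["asha@example.com"],
        "telephoneNumber": ["+61 2 5550 0000"],
    },
    "uid=ben,ou=people,o=ExampleCo,c=AU": {
        "cn": ["Ben Ito"],
        "mail": ["ben@example.com"],
    },
}

def search(base_dn, attr, value):
    """Return the DNs under base_dn whose attribute matches value."""
    return [dn for dn, entry in directory.items()
            if dn.endswith(base_dn) and value in entry.get(attr, [])]

hits = search("ou=people,o=ExampleCo,c=AU", "cn", "Ben Ito")
print(hits)   # → ['uid=ben,ou=people,o=ExampleCo,c=AU']
```

The DN's comma-separated components mirror the tree: the entry sits under an organizational unit, which sits under an organization and a country, exactly the hierarchy the text describes.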
Mesotechnology
Mesotechnology describes a budding research field which could replace nanotechnology in the future as the primary means to control matter at length scales ranging from a cluster of atoms to microscopic elements. The prefix meso- comes from the Greek word mesos, meaning middle; hence the technology spans a range of length scales, as opposed to nanotechnology, which is concerned only with the smallest atomic scales.
Although the term itself is still quite new, the general concept is not. Many fields of science have traditionally focused either on single discrete elements or on large statistical collections, where many theories have been successfully applied. In the field of physics, for example, Quantum Mechanics describes very well phenomena on the atomic to nanoscale, while classical Newtonian Mechanics describes the behavior of objects on the microscale and up. However, the length scale in the middle (the mesoscale) is not well described by either theory. Similarly, psychologists focus heavily on the behavior and mental processes of the individual, while sociologists study the behavior of large societal groups; but what happens when only three people are interacting? This is the mesoscale.
PLAN 9 Operating System
By the mid-1980s, the trend in computing was away from large centralized time-shared computers towards networks of smaller, personal machines, typically UNIX "workstations". People had grown weary of overloaded, bureaucratic timesharing machines and were eager to move to small, self-maintained systems, even if that meant a net loss in computing power. As microcomputers became faster, even that loss was recovered, and this style of computing remains popular today.
Plan 9 began in the late 1980s as an attempt to have it both ways: to build a system that was centrally administered and cost-effective using cheap modern microcomputers as its computing elements. The idea was to build a time-sharing system out of workstations, but in a novel way. Different computers would handle different tasks: small, cheap machines in people's offices would serve as terminals providing access to large, central, shared resources such as computing servers and file servers. For the central machines, the coming wave of shared-memory multiprocessors seemed obvious candidates.
Plan 9 is designed around the basic principle that all resources appear as files in a hierarchical file system which is unique to each process. As in the design of any operating system, the design of the file and directory system, its implementation, and the various interfaces are important. Plan 9 has all of these well-designed features, and together they provide a strong base for an operating system well suited to a distributed, networked environment.
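The "all resources are files, in a namespace unique to each process" idea can be sketched as a toy mount table. This is purely illustrative: the paths and resource names below are invented, and a real Plan 9 namespace is built from kernel mount and bind operations, not a Python dictionary.

```python
# Toy sketch of Plan 9's per-process namespace: resources are reached
# by file-like paths, and each process can bind its own services into
# a private copy of the name tree without affecting other processes.

class Namespace:
    def __init__(self, mounts=None):
        self.mounts = dict(mounts or {})   # path prefix -> resource

    def bind(self, path, resource):
        self.mounts[path] = resource

    def fork(self):
        # A child starts with a copy of the parent's namespace;
        # any later binds stay private to the child.
        return Namespace(self.mounts)

    def open(self, path):
        # Longest-prefix match stands in for walking the file tree.
        for prefix in sorted(self.mounts, key=len, reverse=True):
            if path == prefix or path.startswith(prefix + "/"):
                return self.mounts[prefix]
        raise FileNotFoundError(path)

parent = Namespace()
parent.bind("/dev/cons", "console-device")
child = parent.fork()
child.bind("/net", "tcp-stack-B")        # visible only in the child
print(child.open("/net/tcp"))            # tcp-stack-B
print(parent.open("/dev/cons"))          # console-device
```

The parent cannot open `/net/tcp` at all, which is the point: two processes can see entirely different resources behind the same name.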
The different features of Plan 9 operating system are:
The dump file system makes a daily snapshot of the file store available to the users.
Unicode character set supported throughout the system.
Advanced kernel synchronization facilities for parallel processing.
Security: there is no super-user or root user, and passwords are never sent over the network.
SALT (Speech Application Language Tags)
SALT stands for Speech Application Language Tags. It consists of a small set of XML elements, with associated attributes and DOM object properties, events, and methods, which apply a speech interface to web pages. SALT allows applications to run on a wide variety of devices and through different methods of inputting data.
The main design principles of SALT include reuse of existing standards for grammar and speech output, and separation of the speech interface from business logic and data. SALT is designed to run inside different Web execution environments, so it does not have a predefined execution model; instead it uses an event-wiring model.
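The event-wiring model can be sketched abstractly: rather than following a fixed execution flow, the page wires handlers to element events, which fire as recognition succeeds or fails. The class below and the event names `onreco` and `onnoreco` are used here only as an illustration of the pattern, not as the SALT specification itself.

```python
# Hedged sketch of an event-wiring model like the one SALT relies on:
# handlers are attached to element events, and application logic runs
# only when those events fire. Names here are illustrative.

class SpeechElement:
    def __init__(self):
        self.handlers = {}            # event name -> callback

    def wire(self, event, handler):
        self.handlers[event] = handler

    def fire(self, event, payload=None):
        handler = self.handlers.get(event)
        return handler(payload) if handler else None

results = []
listen = SpeechElement()
listen.wire("onreco", lambda text: results.append(text))      # recognition succeeded
listen.wire("onnoreco", lambda _: results.append("<retry>"))  # recognition failed

listen.fire("onreco", "two tickets to Boston")
print(results)  # ['two tickets to Boston']
```

Because the wiring is data, the same speech elements can be reused in different execution environments, which is the separation of interface from logic the design principles call for.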
It contains a set of tags for inputting data as well as for storing and manipulating that data. The main elements of a SALT document let us specify a grammar for inputting data, inspect the results of recognition, copy those results appropriately, and provide the output the application needs. The architecture of SALT contains mainly four components.
The SAT (SIM Application Toolkit)
The SAT (SIM Application Toolkit) provides a flexible interface through which developers can build services and an MMI (Man Machine Interface) in order to enhance the functionality of the mobile. This module is not aimed at service developers but at network engineers who require a grounding in the concepts of the SAT and how it may impact network architecture and performance. It explores the basic SAT interface along with the architecture required to deliver effective SAT-based services to the handset.
Wireless Application Protocol
The Wireless Application Protocol (WAP) is the result of the WAP Forum's effort to promote industry-wide specifications for technology useful in developing applications and services that operate over wireless communication networks. WAP specifies an application framework and network protocols for wireless devices such as mobile telephones, pagers, and personal digital assistants (PDAs).
The specifications extend and leverage mobile networking technologies (such as digital data networking standards) and Internet technologies (such as XML, URLs, scripting, and various content formats).
The effort is aimed at enabling operators, manufacturers, and content developers to meet the challenges of building advanced differentiated services and implementations in a fast and flexible manner.
The objectives of the WAP Forum are:
To bring Internet content and advanced data services to digital cellular phones and other wireless terminals;
To create a global wireless protocol specification that will work across differing wireless network technologies;
To enable the creation of content and applications that scale across a very wide range of bearer networks and device types; and
To embrace and extend existing standards and technology wherever appropriate.
The WAP Architecture specification is intended to present the system and protocol architectures essential to achieving the objective of the WAP Forum.
WAP is positioned at the convergence of two rapidly evolving network technologies: wireless data and the Internet. Both the wireless data market and the Internet are growing very quickly and are continuously reaching new customers. The explosive growth of the Internet has fuelled the creation of new and exciting information services.
Most of the technology developed for the Internet has been designed for desktop and larger computers on medium-to-high-bandwidth, generally reliable data networks. Mass-market, hand-held wireless devices present a more constrained computing environment compared to desktop computers. Because of fundamental limitations of power and form factor, mass-market handheld devices tend to have:
Less powerful CPUs, less memory (ROM and RAM), restricted power consumption, smaller displays, and different input devices (e.g. a phone keypad).
Similarly, wireless data networks present a more constrained communication environment compared to wired networks. Because of fundamental limitations of power, available spectrum, and mobility, wireless data networks tend to have: less bandwidth, more latency, less connection stability, and less predictable availability.
Mobile networks are growing in complexity, and the cost of provisioning more value-added services is increasing. In order to meet the requirements of mobile network operators, solutions must be:
Interoperable-terminals from different manufacturers communicate with services in the mobile network;
Scalable-mobile network operators are able to scale services to customer needs;
Efficient-provides quality of service suited to the behaviour and characteristics of the mobile network;
Reliable-provides a consistent and predictable platform for deploying services; and
Secure-enables services to be extended over potentially unprotected mobile networks while still preserving the integrity of user data, and protects the devices and services from security problems such as denial of service.
The WAP specifications address mobile network characteristics and operator needs by adapting existing network technology to the special requirements of mass-market, hand-held wireless data devices, and by introducing new technology where appropriate.
The requirements of the WAP Forum architecture are to:
Leverage existing standards where possible;
Define a layered, scalable and extensible architecture;
Support as many wireless networks as possible;
Optimise for narrow-band bearers with potentially high latency;
Optimise for efficient use of device resources (low memory / CPU usage / power consumption);
Provide support for secure application and communications;
Enable the creation of Man Machine Interfaces (MMIs) with maximum flexibility and vendor control;
Provide access to local handset functionality, such as logical indication for incoming call;
Facilitate network-operator and third party service provisioning;
Support multi-vendor interoperability by defining the optional and mandatory components of the specification.
UMA (Unlicensed Mobile Access)
UMA (Unlicensed Mobile Access) is an industry collaboration to extend GSM and GPRS services into customer sites by utilizing unlicensed radio technologies such as Wi-Fi (Wireless Fidelity) and Bluetooth®. This is achieved by tunnelling GSM and GPRS protocols through a broadband IP network towards the access point situated at the customer site, and across the unlicensed radio link to the mobile device.
Thus UMA provides an additional access network to the existing GERAN (GSM EDGE Radio Access Network) and UTRAN (UMTS Terrestrial Radio Access Network).
SIP (Session Initiation Protocol)
Session Initiation Protocol (SIP) is a protocol developed by the IETF MMUSIC Working Group, and a proposed standard, for initiating, modifying, and terminating an interactive user session that involves multimedia elements such as video, voice, instant messaging, online games, and virtual reality.
SIP clients traditionally use TCP and UDP port 5060 to connect to SIP servers and other SIP endpoints. SIP is primarily used in setting up and tearing down voice or video calls. However, it can be used in any application where session initiation is a requirement, including event subscription and notification, terminal mobility, and so on. There are a large number of SIP-related RFCs that define behavior for such applications. All voice and video media are carried over RTP.
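SIP is a text-based protocol: a request is a start line followed by CRLF-terminated headers. A minimal INVITE might be sketched as below; the header names follow the SIP standard, but the host names and token values are invented, and a real request needs a full transaction layer and an SDP body to be usable.

```python
# Sketch of building a minimal SIP INVITE request as text.
# Header names are standard SIP; addresses and tokens are made up.

def make_invite(from_uri, to_uri, call_id, branch):
    lines = [
        f"INVITE {to_uri} SIP/2.0",
        f"Via: SIP/2.0/UDP client.example.com:5060;branch={branch}",
        f"From: <{from_uri}>;tag=1928301774",
        f"To: <{to_uri}>",
        f"Call-ID: {call_id}",
        "CSeq: 1 INVITE",
        "Content-Length: 0",
    ]
    # Headers are CRLF-separated; a blank line ends the header section.
    return "\r\n".join(lines) + "\r\n\r\n"

msg = make_invite("sip:alice@example.com", "sip:bob@example.org",
                  "a84b4c76e66710", "z9hG4bK776asdhds")
print(msg.splitlines()[0])  # INVITE sip:bob@example.org SIP/2.0
```

Because the wire format is plain text, the same message can be carried over UDP, TCP, or TLS without change, which is part of what makes SIP easy to extend to new applications.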
A motivating goal for SIP was to provide a signaling and call setup protocol for IP-based communications that can support a superset of the call processing functions and features present in the public switched telephone network (PSTN).
SIP-enabled telephony networks can also implement many of the more advanced call-processing features present in Signalling System 7 (SS7), though the two protocols themselves are very different. SS7 is a highly centralized protocol, characterized by a complex central network architecture and dumb endpoints (traditional telephone handsets).
SIP, by contrast, is a peer-to-peer protocol.
SIP network elements
Hardware endpoints (devices with the look, feel, and shape of a traditional telephone, but that use SIP and RTP for communication) are commercially available from several vendors. Some of these can use Electronic Numbering (ENUM) or DUNDi to translate existing phone numbers into SIP addresses using DNS, so calls to other SIP users can bypass the telephone network, even though a service provider might normally act as a gateway to the PSTN for traditional phone numbers (and charge for it).
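The ENUM translation itself is a simple, mechanical mapping (RFC 6116): strip the phone number to its digits, reverse them, dot-separate them, and append the `e164.arpa` zone; a NAPTR lookup on the resulting name can then yield a SIP address. A short sketch:

```python
# ENUM (RFC 6116) maps an E.164 phone number to a DNS name:
# keep only the digits, reverse them, dot-separate, add e164.arpa.
# The DNS lookup itself is not performed here.

def enum_domain(e164_number):
    digits = [c for c in e164_number if c.isdigit()]
    return ".".join(reversed(digits)) + ".e164.arpa"

print(enum_domain("+1-555-123-4567"))
# 7.6.5.4.3.2.1.5.5.5.1.e164.arpa
```

Reversing the digits puts the most significant part of the number (the country code) closest to the DNS root, matching how DNS delegates from right to left.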
SIP makes use of elements called proxy servers to help route requests to the user's current location, authenticate and authorize users for services, implement provider call-routing policies, and provide features to users.
SIP also provides a registration function that allows users to upload their current locations for use by proxy servers.
Since registrations play an important role in SIP, a User Agent Server that handles a REGISTER is given the special name registrar.
It is an important concept that the distinction between types of SIP servers is logical, not physical.
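The registrar and proxy roles, and the fact that they are logical rather than physical, can be sketched together: one location table serves both a REGISTER handler and a proxy's routing decision. The class names and the toy "200 OK" / "480" responses below are illustrative stand-ins for the real protocol exchange.

```python
# Toy sketch of SIP registration and routing: REGISTER stores a
# binding from a user's address-of-record (AOR) to a current contact,
# and a proxy consults that location service to route an INVITE.

class Registrar:
    def __init__(self):
        self.locations = {}             # AOR -> current contact URI

    def register(self, aor, contact):
        self.locations[aor] = contact
        return "200 OK"

    def lookup(self, aor):
        return self.locations.get(aor)

class Proxy:
    def __init__(self, registrar):
        # The same process can play both logical roles; the split
        # between "registrar" and "proxy" is not a physical one.
        self.registrar = registrar

    def route_invite(self, aor):
        contact = self.registrar.lookup(aor)
        return contact if contact else "480 Temporarily Unavailable"

registrar = Registrar()
registrar.register("sip:bob@example.org", "sip:bob@192.0.2.4:5060")
proxy = Proxy(registrar)
print(proxy.route_invite("sip:bob@example.org"))  # sip:bob@192.0.2.4:5060
```

If Bob later registers from a different device, only the binding changes; callers keep dialing the same address-of-record, which is what makes user mobility transparent to the rest of the network.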