
Khamis, 14 April 2011

Biometric Hand Scanning

What is HAND SCANNING??
Hand scanning involves the measurement and analysis of the shape of one's hand. It is a fairly straightforward procedure and is surprisingly accurate. Though it requires special hardware, it can be easily integrated into other devices or systems.



In the process of hand scanning, 
------->> The user places the palm of the hand on a metal surface which has guidance pegs on it. 
------->> The hand is aligned properly by the pegs so that the device can read the hand attributes.
------->> The device then checks its database to verify the user.
------->> This takes less than 5 seconds.
NOTE: current biometric palm scanners or hand scanners do not have any way to detect whether a hand is living or not, and therefore can be fooled by a fake hand if pressure is applied to the plate correctly.
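The verification step above can be sketched in Python; the feature values and tolerance below are invented for illustration, not taken from any real scanner.

```python
# Hypothetical sketch of hand-geometry verification: compare a measured
# feature vector (finger lengths, palm width, etc., in mm) against the
# template stored at enrollment, accepting only if every value is in tolerance.

def verify_hand(measured, template, tolerance=2.0):
    """Return True if each measurement is within `tolerance` of the template."""
    if len(measured) != len(template):
        return False
    return all(abs(m - t) <= tolerance for m, t in zip(measured, template))

enrolled = [72.1, 80.4, 85.0, 78.9, 55.3]   # stored at enrollment
scanned  = [72.8, 79.9, 84.6, 79.5, 55.0]   # read from the pegged plate
print(verify_hand(scanned, enrolled))        # True
```

Real systems use more robust matching, but the idea is the same: a small template plus a tolerance check, which is also why a well-made fake hand can pass.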

HISTORY of HAND SCANNING

Hand scanning can be termed a forefather of modern biometrics by virtue of a 20-year history of live applications. There have been six different hand scanning products developed over this span, including some of the most commercially successful biometrics to date.


USE of BIOMETRIC HAND SCANNERS SYSTEM

Biometric hand scanning systems are employed at over 8,000 locations including the Colombian legislature, San Francisco International Airport, day care centers, a sperm bank, welfare agencies, hospitals, and immigration facilities for the INSPASS frequent international traveler system.


EVALUATION OF BIOMETRIC HAND SCANNING

Hand geometry has several advantages. It is very easy for users to work the system, requiring nothing more than placing one's hand on the device. It has no public attitude problems, as it is most commonly associated with authorized access. The amount of data required to uniquely identify a user in a system is by far the smallest, allowing it to be used with SmartCards easily. It is also quite resistant to attempts to fool the system. The time and energy required to sufficiently emulate a person's hand is generally too much to be worth the effort, especially since the technology is generally used for verification purposes only.

Biometric hand readers also have some disadvantages, including their proprietary hardware cost and required size. Also, while injuries to hands can cause difficulty in using the reader effectively, the general lack of accuracy requires that it be used for verification alone. In fact, because of the small amount of information measured, it is possible to have duplicate readings if enough people are put into the system, eliminating its use as an identification system.



Computer and Information System Manager


In the modern workplace, it is imperative that Information Technology (IT) works both effectively and reliably. Computer and information systems managers play a vital role in the implementation and administration of technology within their organizations. They plan, coordinate, and direct research on the computer-related activities of firms. In consultation with other managers, they help determine the goals of an organization and then implement technology to meet those goals. They oversee all technical aspects of an organization, such as software development, network security, and Internet operations.
Computer and information systems managers direct the work of other IT professionals, such as computer software engineers, computer programmers, computer systems analysts, and computer support specialists. They plan and coordinate activities such as installing and upgrading hardware and software, programming and systems design, the implementation of computer networks, and the development of Internet and intranet sites. They are increasingly involved with the upkeep, maintenance, and security of networks. They analyze the computer and information needs of their organizations from an operational and strategic perspective and determine immediate and long-range personnel and equipment requirements. They assign and review the work of their subordinates and stay abreast of the latest technology to ensure that the organization remains competitive.
Computer and information systems managers can have additional duties, depending on their role within an organization. Chief technology officers (CTOs), for example, evaluate the newest and most innovative technologies and determine how these can help their organizations. They develop technical standards, deploy technology, and supervise workers who deal with the daily information technology issues of the firm. When a useful new tool has been identified, the CTO determines one or more possible implementation strategies, including cost-benefit and return on investment analyses, and presents those strategies to top management, such as the chief information officer (CIO).

Management information systems (MIS) directors or information technology (IT) directors manage computing resources for their organizations. They often work under the chief information officer and plan and direct the work of subordinate information technology employees. These managers ensure the availability, continuity, and security of data and information technology services in their organizations. In this capacity, they oversee a variety of technical departments, develop and monitor performance standards, and implement new projects.
IT project managers develop requirements, budgets, and schedules for their firm’s information technology projects. They coordinate such projects from development through implementation, working with their organization’s IT workers, as well as clients, vendors, and consultants. These managers are increasingly involved in projects that upgrade the information security of an organization.
Work environment
Computer and information systems managers generally work in clean, comfortable offices. Long hours are common, and some may have to work evenings and weekends to meet deadlines or solve unexpected problems; in 2008, about 25 percent worked more than 50 hours per week. Some computer and information systems managers may experience considerable pressure in meeting technical goals with short deadlines or tight budgets. As networks continue to expand and more work is done remotely, computer and information systems managers have to communicate with and oversee offsite employees using laptops, e-mail, and the Internet.
Injuries in this occupation are uncommon, but like other workers who spend considerable time using computers, computer and information systems managers are susceptible to eyestrain, back discomfort, and hand and wrist problems such as carpal tunnel syndrome.

Tools Used in Programming

Programming is the process of scheduling, planning and writing a computer program. With the help of programming, a computer programmer can create a sequence of commands that tells the computer processor what to do. Programmers use a variety of tools that help prevent the occurrence of mistakes, commonly known as computer bugs. These tools also convert the language used by the computer programmer into a language that the computer can understand.

Code Editor 

The code editor is a tool designed for writing and editing code. Programming software usually comes with a code editor. The editor adapts to the language the programmer uses. It allows the user to insert and edit the body of the code using the keyboard or mouse. The code editor comes with a feature called code coloring that allows the programmer to differentiate sections of the code.

Compiler 

The compiler defines the instructions that are acceptable in a program. It converts a high-level language into machine code, the only set of instructions understood by the computer processor. The compiler allows a programmer to write programs in high-level languages. It takes the programmer's source code as input and generates a series of commands written in binary bits. The compiler also analyzes the source code to collect, reorganize and generate a new set of instructions that make the program run faster on the computer.

Interpreter 

The interpreter executes source code written in a high-level language without going through a compilation stage. The interpreter allows programmers to test a program quickly, letting them see the results before adding new sections to the code. Programmers prefer to use the interpreter during the development stages of the programs they are writing. An interpreter immediately translates the source code and then executes it. With the use of an interpreter, there is a significant reduction in the amount of time a programmer has to devote to programming.

Decompiler 

The decompiler reverses the process done by the compiler. It translates machine code back into a high-level language to create a readable presentation of the program. A programmer uses a decompiler to detect vulnerabilities and malicious code, verify that code matches, revise binary code and learn algorithms. Programmers use the decompiler as a form of maintenance and security whenever they write programs.

Parser 

The parser analyzes the structure of statements in the source code written by the programmer. The parser compares each string of tokens against the grammar rules of the programming language in order to determine valid code structures. During the parsing process, the computer looks for a particular constituent and consults the grammar rules to choose among alternatives. Parsing techniques also apply to natural languages such as English, French and German; however, the results are not as seamless as those for the formal grammars of programming languages.


Ahad, 10 April 2011

What is DO WHILE and DO UNTIL?

The basic types of loops are DO WHILE and DO UNTIL.

A DO WHILE loop continues executing the body of the loop as long as the comparison test is true, but a DO UNTIL loop executes the body as long as the comparison test is false. In other words, doing something until a condition is TRUE is the same as doing something while a condition is FALSE.
Do While X <= 10 is the same as Do Until X > 10.
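The equivalence above can be demonstrated in Python, which has only a WHILE-style loop; the UNTIL form is written by negating the test:

```python
x = 1
total_while = 0
while x <= 10:            # DO WHILE: keep looping while the test is true
    total_while += x
    x += 1

x = 1
total_until = 0
while not (x > 10):       # DO UNTIL: keep looping until the test is true
    total_until += x
    x += 1

print(total_while, total_until)   # 55 55 -- the two loops are equivalent
```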

PROGRAMMING LANGUAGE



WHAT IS FIFTH-GENERATION PROGRAMMING LANGUAGE (5GL)
5GL is a programming language based on solving problems using constraints given to the program, rather than using an algorithm written by a programmer. Most constraint-based and logic programming languages and some declarative languages are fifth-generation languages. Fifth-generation languages are used mainly in artificial intelligence research. Prolog, OPS5, and Mercury are the best known fifth-generation languages.

BENEFIT OF 5GL
  • Most logic and constraint-based programming languages fall under the 5GL umbrella.
  • Allows the computer to work independently on a given problem without the assistance of a separate set of code created specifically for the purpose.
  • Can accomplish more with less effort than languages of earlier generations, although platform architecture limitations, OS dependencies and compiler availability may limit where it is the most efficient way to program.
  • It may also have drawbacks that mean it cannot be reliably used in some mission-critical software.
  • The right language generation depends on the project being coded. For example, assembly language, a second-generation language, is appropriate in embedded applications where data storage and timing need exact control. C is a "no-brainer" if you intend to use the Win32 API, since it is its native language, so to speak. 
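Python is not a 5GL, but the constraint idea (the programmer states what must hold, and the machine searches for the answer) can be sketched with a brute-force solver; the constraints below are invented for illustration:

```python
# 5GL-style thinking in miniature: the program below states WHAT must hold
# (the constraints), not HOW to find the answer; the search is generic.

def solve(constraints, domain):
    """Return the first (x, y) pair in `domain` satisfying every constraint."""
    for x in domain:
        for y in domain:
            if all(c(x, y) for c in constraints):
                return (x, y)
    return None

constraints = [
    lambda x, y: x + y == 10,   # the sum must be 10
    lambda x, y: x * y == 21,   # the product must be 21
    lambda x, y: x < y,         # order the pair
]
print(solve(constraints, range(1, 10)))   # (3, 7)
```

A real 5GL such as Prolog builds this search into the language itself, so the programmer writes only the constraints.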


You should also know about another programming language called Mercury. CLICK the link below: http://www.knowledgerush.com/kr/encyclopedia/Mercury_programming_language/

Sabtu, 9 April 2011

DATA ORGANIZATION


KEY FIELD

WHAT IS KEY FIELD
  • A Key Field or Primary Key is the field in a record that uniquely identifies each record.
EXAMPLES OF KEY FIELDS
          1. Social Security Number
          2. Student Identification Numbers
          3. Employee Identification Numbers
          4. National Identification Numbers
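As a small sketch, records can be stored in a Python dictionary keyed by the key field, so each record is found directly by its unique identifier; the IDs and names below are made up:

```python
# The key field (here a student ID) uniquely identifies each record,
# so it can serve as the dictionary key for direct lookup.
students = {
    "BH100063": {"name": "Ali", "course": "Computer Science"},
    "AD090112": {"name": "Siti", "course": "Diploma in IT"},
}

print(students["BH100063"]["name"])   # Ali
```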


How Social Security Numbers Work

  • As each SSN is unique, the number's role broadened from its initial record-keeping purpose of helping the worker and the government track Social Security entitlements.
  • The nine digits of the SSN are divided into three parts, each separated by a hyphen. 
  • The first three digits, the area numbers, are based upon the zip code in the applicant's mailing address on the original application form. 
  • The second two numbers are the group numbers, ranging from 01 to 99, which serve to break the SSNs with the same area numbers into more manageable blocks. 
  • The final four digits of the SSN, the serial numbers, run consecutively from 0001 to 9999 within each group designation.
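Splitting a fictitious SSN into the three parts described above is straightforward:

```python
# Split an SSN of the form AAA-GG-SSSS into area, group and serial numbers.
# The number below is fictitious.
ssn = "123-45-6789"
area, group, serial = ssn.split("-")

print(area, group, serial)            # 123 45 6789
print(1 <= int(group) <= 99)          # True: groups run 01-99
print(1 <= int(serial) <= 9999)       # True: serials run 0001-9999
```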

SOCIAL SECURITY IN MALAYSIA

The Social Security Organization (SOCSO) provides social security protection through social insurance, including medical and cash benefits and the provision of artificial aids and rehabilitation to employees, to reduce suffering and to provide financial guarantees and protection to the family.


 STUDENT IDENTIFICATION NUMBERS 


A student number is the number assigned to a student upon first entering or registering with an educational institution and is used to identify the student in lieu of a name for grades, essays, projects, exams and official documents.

For Universiti Teknologi Malaysia (UTM) students, the matric card number starts with 'A' for undergraduate intake students but with 'B' for direct intake students; 'H' denotes a Degree student. A Master student's matric number starts with 'M'. UTM KL students' matric numbers start with 'AD', where 'D' denotes a Diploma student.


For example, BH100063: B for direct intake student; H for Degree; 10 for the academic year when the number was issued, referring to 2010; the last 4 digits refer to the individual number issued in the order of enrolment.
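The decoding described above can be sketched in Python; the mapping tables are assumptions based only on the description in this post:

```python
# Decode a UTM matric number of the form described above.
matric = "BH100063"

intake = {"A": "undergraduate intake", "B": "direct intake"}[matric[0]]
level  = {"H": "Degree", "D": "Diploma", "M": "Master"}.get(matric[1], "unknown")
year   = 2000 + int(matric[2:4])   # '10' -> 2010
serial = matric[4:]                # individual number, in enrolment order

print(intake, level, year, serial)   # direct intake Degree 2010 0063
```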

EMPLOYEE IDENTIFICATION NUMBERS

An Employer Identification Number (EIN) is a nine-digit number that the IRS assigns in the following format:
XX-XXXXXXX. It is used to identify the tax accounts of employers and certain others who have no employees. However, for employee plans, an alpha character (for example, P) or the plan number (e.g., 003) may follow the EIN. 

The IRS uses the number to identify taxpayers that are required to file various business tax returns. EINs are used by employers, sole proprietors, corporations, partnerships, non-profit associations, trusts, estates of decedents, government agencies, certain individuals, and other business entities. 


NATIONAL IDENTIFICATION NUMBER


            In Malaysia, a 12-digit number (format: YYMMDD-SS-###G, since 1991) known as the National Registration Identification Card Number (NRIC No.) is issued to citizens and permanent residents on a MyKad. Prior to January 1, 2004, a separate social security (SOCSO) number (also the old IC number in format 'S#########', S denotes state of birth or country of origin (alphabet or number), # is a 9-digit serial number) was used for social security-related affairs.

The first group of numbers (YYMMDD) is the date of birth. The second group of numbers (SS) represents the place of birth of the holder: the states (01-13), the federal territories (14-16) or the country of origin (60-85). The last group of numbers (###G) is a randomly generated serial number with no identified pattern. The last digit (G) is an odd number for a male, while an even number is given for a female. For example, 890123-03-7889 indicates that the holder was born on 23 January 1989 in Kelantan, and the odd last digit indicates a male.
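The same decoding can be sketched in Python using the example number from the text:

```python
from datetime import datetime

# Decode the example NRIC from the text (890123-03-7889) into its parts.
nric = "890123-03-7889"
dob_part, place, serial = nric.split("-")

# %y maps two-digit years 69-99 to the 1900s, so '89' becomes 1989.
dob = datetime.strptime(dob_part, "%y%m%d").date()
gender = "male" if int(serial[-1]) % 2 == 1 else "female"

print(dob, place, gender)   # 1989-01-23 03 male
```

Looking up the place code (03 is Kelantan) would need the full state table, which is omitted here.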

Ahad, 3 April 2011

DATABASE MANAGEMENT SYSTEM

A Database Management System (DBMS)
  • is a set of computer programs that controls the creation, maintenance and use of a database.
  • allows organizations to place control of database development in the hands of database administrators (DBAs) and other specialists. 
  • is a system software package that supports the use of integrated collections of data records and files known as databases. 
  • in large systems, allows users and other software to store and retrieve data in a structured way. 
  • is commonly used to track information about users: their names, login information, and various addresses and phone numbers. 











Instead of having to write computer programs to extract information, users can ask simple questions in a query language.
So what is a query language???
It allows users to interactively interrogate the database, analyze its data and update it according to the user's privileges on the data. 
It also controls the security of the database. 
Data security prevents unauthorized users from viewing or updating the database. 
Using passwords, users are allowed access to the entire database or to subsets of it called subschemas. 
For example, an employee database can contain all the data about an individual employee, but one group of users may be authorized to view only payroll data, 
while others are allowed access only to work history and medical data.  
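As a small illustration using Python's built-in sqlite3 module, SQL (the most common query language) can interrogate a database, and a view can play the role of a subschema exposing only part of the data; the table and rows below are invented:

```python
import sqlite3

# Build a tiny in-memory employee database and query it with SQL.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE employee (id INTEGER, name TEXT, salary REAL, dept TEXT)")
db.executemany("INSERT INTO employee VALUES (?, ?, ?, ?)",
               [(1, "Aminah", 3200.0, "Payroll"), (2, "Ravi", 2800.0, "IT")])

# A view acts like a subschema: a subset of the data some users may see
# (here, everything except the salary column).
db.execute("CREATE VIEW work_history AS SELECT id, name, dept FROM employee")

for row in db.execute("SELECT name, dept FROM work_history ORDER BY id"):
    print(row)   # ('Aminah', 'Payroll') then ('Ravi', 'IT')
```

Real DBMSs add per-user permissions on top, so one group of users can be granted the view but not the underlying table.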



There are some standard functions of a DBMS that can be shared:

The ability to update and retrieve data
This is a fundamental component of a DBMS and essential to database management. Without the ability to view or manipulate data, there would be no point to using a database system.
Updating data in a database includes adding new records, deleting existing records and changing information within a record.

Support Concurrent Updates
Concurrent updates occur when multiple users make updates to the database simultaneously. Supporting concurrent updates is also crucial to database management as this component ensures that updates are made correctly and the end result is accurate. Without DBMS intervention, important data could be lost and/or inaccurate data stored. DBMS uses features to support concurrent updates such as batch processing, locking, two-phase locking, and time stamping to help make certain that updates are done accurately.
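A minimal sketch of locking, using Python threads: without the lock, two threads incrementing a shared balance could interleave and lose updates; with it, the final total is always correct.

```python
import threading

# Shared state updated by several threads at once.
balance = 0
lock = threading.Lock()

def deposit(amount, times):
    global balance
    for _ in range(times):
        with lock:            # only one thread may update at a time
            balance += amount

threads = [threading.Thread(target=deposit, args=(1, 10000)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(balance)   # 40000 -- no lost updates
```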

Recovery of Data
In the event a catastrophe occurs, DBMS must provide ways to recover a database so that data is not permanently lost. There are times computers may crash, a fire or other natural disaster may occur, or a user may enter incorrect information invalidating or making records inconsistent. If the database is destroyed or damaged in any way, the DBMS must be able to recover the correct state of the database, and this process is called Recovery. The easiest way to do this is to make regular backups of information. 


Metadata


The term Metadata is an ambiguous term which is used for two fundamentally different concepts (Types). Although an expression "data about data" is often used, it does not apply to both in the same way. Structural metadata, the design and specification of data structures, cannot be about data, because at design time the application contains no data. In this case the correct description would be "data about the containers of data". Descriptive metadata on the other hand, is about individual instances of application data, the data content. In this case, a useful description (resulting in a disambiguating neologism) would be "data about data contents" or "content about content" thus metacontent. Descriptive, Guide and the NISO concept of administrative metadata are all subtypes of metacontent.
Metadata (metacontent) is traditionally found in the card catalogues of libraries. By describing the contents and context of data files, the quality of the original data/files is greatly increased. For example, a webpage may include metadata specifying what language it is written in, what tools were used to create it, and where to go for more on the subject, allowing browsers to automatically improve the experience of users.

Definition

Metadata (metacontent) is defined as data providing information about one or more aspects of the data, such as:
  • Means of creation of the data
  • Purpose of the data
  • Time and date of creation
  • Creator or author of data
  • Placement on a computer network where the data was created
  • Standards used
For example, a digital image may include metadata that describes how large the picture is, the color depth, the image resolution, when the image was created, and other data. A text document's metadata may contain information about how long the document is, who the author is, when the document was written, and a short summary of the document.
Metadata is data. As such, metadata can be stored and managed in a database, often called a registry or repository. However, it is impossible to identify metadata just by looking at it because a user would not know when data is metadata or just data.

Libraries

Metadata has been used in various forms as a means of cataloging archived information. The Dewey Decimal System employed by libraries for the classification of library materials is an early example of metadata usage. Library catalogues used 3x5 inch cards to display a book's title, author, subject matter, and a brief plot synopsis along with an abbreviated alpha-numeric identification system which indicated the physical location of the book within the library's shelves. Such data helps classify, aggregate, identify, and locate a particular book. Another form of older metadata collection is the use by the US Census Bureau of what is known as the "Long Form." The Long Form asks questions that are used to create demographic data and to find patterns of distribution. The term metadata was coined in 1968 by Philip Bagley, one of the pioneers of computerized document retrieval. Since then the fields of information management, information science, information technology, librarianship and GIS have widely adopted the term. In these fields the word metadata is defined as "data about data". While this is the generally accepted definition, various disciplines have adopted their own more specific explanations and uses of the term.
For the purposes of this article, an "object" refers to any of the following:
  • A physical item such as a book, CD, DVD, map, chair, table, flower pot, etc.
  • An electronic file such as a digital image, digital photo, document, program file, database table, etc.

Photographs

Metadata may be written into a digital photo file that will identify who owns it, copyright & contact information, what camera created the file, along with exposure information and descriptive information such as keywords about the photo, making the file searchable on the computer and/or the Internet. Some metadata is written by the camera and some is input by the photographer and/or software after downloading to a computer.
Photographic Metadata Standards are governed by organizations that develop the following standards. They include, but are not limited to:
  • IPTC Information Interchange Model IIM (International Press Telecommunications Council),
  • IPTC Core Schema for XMP
  • XMP – Extensible Metadata Platform (an Adobe standard)
  • Exif – Exchangeable image file format, Maintained by CIPA (Camera & Imaging Products Association) and published by JEITA (Japan Electronics and Information Technology Industries Association)
  • Dublin Core (Dublin Core Metadata Initiative – DCMI)
  • PLUS (Picture Licensing Universal System)

Video

Metadata is particularly useful in video, where information about its contents (such as transcripts of conversations and text descriptions of its scenes) is not directly understandable by a computer, but where efficient search is desirable.

Web pages

Web pages often include metadata in the form of meta tags. Description and keywords meta tags are commonly used to describe the Web page's content. Most search engines use this data when adding pages to their search index.
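A small sketch of reading such meta tags with Python's standard html.parser; the sample page below is invented:

```python
from html.parser import HTMLParser

# A tiny sample page carrying the description and keywords meta tags.
html = """<html><head>
<meta name="description" content="A blog about biometrics and databases">
<meta name="keywords" content="biometrics, DBMS, metadata">
</head><body></body></html>"""

class MetaTagParser(HTMLParser):
    """Collect name/content pairs from <meta> tags into a dict."""
    def __init__(self):
        super().__init__()
        self.meta = {}
    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            d = dict(attrs)
            if "name" in d and "content" in d:
                self.meta[d["name"]] = d["content"]

p = MetaTagParser()
p.feed(html)
print(p.meta["keywords"])   # biometrics, DBMS, metadata
```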

Creation of metadata

Metadata can be created either by automated information processing or by manual work. Elementary metadata captured by computers can include information about when a file was created, who created it, when it was last updated, file size and file extension.
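A short Python sketch of the elementary metadata a computer keeps for a file (size, extension, modification time); the demo file is created and removed by the script itself:

```python
import os
import time

# Create a small demo file, then read back its file-system metadata.
path = "demo.txt"
with open(path, "w") as f:
    f.write("hello")

info = os.stat(path)                 # metadata kept by the file system
_, ext = os.path.splitext(path)

print(ext, info.st_size)             # .txt 5
print(time.ctime(info.st_mtime))     # last-modified timestamp
os.remove(path)                      # clean up the demo file
```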

Metadata types

Although the application of metadata is manifold, covering a large variety of fields, there are specialised and well-accepted models to specify types of metadata. Bretherton & Singley (1994) distinguish between two distinct classes: structural/control metadata and guide metadata. Structural metadata is used to describe the structure of computer systems such as tables, columns and indexes. Guide metadata is used to help humans find specific items and is usually expressed as a set of keywords in a natural language. According to Ralph Kimball, metadata can be divided into two similar categories: technical metadata and business metadata. Technical metadata corresponds to internal metadata, business metadata to external metadata. Kimball adds a third category named process metadata. On the other hand, NISO distinguishes between three types of metadata: descriptive, structural and administrative. Descriptive metadata is the information used to search for and locate an object, such as title, author, subject, keywords and publisher; structural metadata describes how the components of the object are organised; and administrative metadata refers to technical information, including file type. Two sub-types of administrative metadata are rights management metadata and preservation metadata.

Metadata structures

Metadata (metacontent), or more correctly, the vocabularies used to assemble metadata (metacontent) statements, is typically structured according to a standardised concept using a well defined metadata scheme, including: metadata standards and metadata models. Tools such as controlled vocabularies, taxonomies, thesauri, data dictionaries and metadata registries can be used to apply further standardisation to the metadata.

Metadata syntax

Metadata (metacontent) syntax refers to the rules created to structure the fields or elements of metadata (metacontent). A single metadata scheme may be expressed in a number of different markup or programming languages, each of which requires a different syntax. For example, Dublin Core may be expressed in plain text, HTML, XML and RDF.
A common example of (guide) metacontent is the bibliographic classification, the subject, the Dewey Decimal class number. There is always an implied statement in any "classification" of some object. To classify an object as, for example, Dewey class number 514 (Topology) (e.g. a book has this number on the spine), the implied statement is: "<book><subject heading><514>". This is a subject-predicate-object triple, or more precisely a class-attribute-value triple. The first two elements of the triple (class, attribute) are pieces of structural metadata having a defined semantic. The third element is a value, preferably from some controlled vocabulary, some reference (master) data. The combination of the metadata and master data elements results in a statement which is a metacontent statement, i.e. "metacontent = metadata + master data". All these elements can be thought of as "vocabulary". Both metadata and master data are vocabularies which can be assembled into metacontent statements. There are many sources of these vocabularies, both meta and master data: UML, EDIFACT, XSD, Dewey/UDC/LoC, SKOS, ISO-25964, Pantone, Linnaean Binomial Nomenclature etc. Using controlled vocabularies for the components of metacontent statements, whether for indexing or finding, is endorsed by ISO-25964: "If both the indexer and the searcher are guided to choose the same term for the same concept, then relevant documents will be retrieved." This is particularly relevant when considering that the behemoth of the internet, Google, simply indexes and then matches text strings; there is no intelligence or "inferencing" occurring.

Hierarchical, linear and planar schemata

Metadata schemas can be hierarchical in nature where relationships exist between metadata elements and elements are nested so that parent-child relationships exist between the elements. An example of a hierarchical metadata schema is the IEEE LOM schema where metadata elements may belong to a parent metadata element. Metadata schemas can also be one dimensional, or linear, where each element is completely discrete from other elements and classified according to one dimension only. An example of a linear metadata schema is Dublin Core schema which is one dimensional. Metadata schemas are often two dimensional, or planar, where each element is completely discrete from other elements but classified according to two orthogonal dimensions.[9]

Metadata hypermapping

In all cases where the metadata schemata exceed the planar depiction, some type of hypermapping is required to enable display and view of metadata according to chosen aspect and to serve special views. Hypermapping frequently applies to layering of geographical and geological information overlays.

Granularity

Granularity is a term that applies to data as well as to metadata. The degree to which metadata is structured is referred to as its granularity. Metadata with a high granularity allows for deeper structured information and enables greater levels of technical manipulation; however, a lower level of granularity means that metadata can be created at considerably lower cost but will not provide as detailed information. The major impact of granularity is not only on creation and capture, but moreover on maintenance. As soon as the metadata structures become outdated, access to the referred data becomes outdated too. Hence granularity should take into account the effort to create the metadata as well as the effort to maintain it.

Metadata standards

International standards apply to metadata. Much work is being accomplished in the national and international standards communities, especially ANSI (American National Standards Institute) and ISO (International Organization for Standardization) to reach consensus on standardizing metadata and registries.
The core standard is ISO/IEC 11179-1:2004 [11] and subsequent standards (see ISO/IEC 11179). All registrations published so far according to this standard cover just the definition of metadata; they do not address the structuring of metadata storage or retrieval, nor any administrative standardisation. It is important to note that this standard refers to metadata as data about containers of data, and not to metadata (metacontent) as data about data contents. It should also be noted that this standard originally describes itself as a "data element" registry, describing disembodied data elements, and explicitly disavows the capability of containing complex structures. Thus the original term "data element" is more applicable than the later applied buzzword "metadata".

Metadata usage

  • Data Virtualization

Data Virtualization has emerged as the new software technology to complete the virtualization stack in the enterprise. Metadata is used in Data Virtualization servers, which are enterprise infrastructure components alongside database and application servers. Metadata in these servers is stored in a persistent repository and describes business objects in various enterprise systems and applications.
  • Statistics and census services

Standardisation work has had a large impact on efforts to build metadata systems in the statistical community. Several metadata standards are described, and their importance to statistical agencies is discussed. Applications of the standards at the Census Bureau, Environmental Protection Agency, Bureau of Labor Statistics, Statistics Canada, and many others are described. Emphasis is on the impact a metadata registry can have in a statistical agency.
  • Library and information science

Libraries employ metadata in library catalogues, most commonly as part of an Integrated Library Management System (ILMS). Metadata is obtained by cataloguing resources such as books, periodicals, DVDs, web pages, and digital images. This data is stored in the ILMS using the MARC metadata standard. The purpose is to direct patrons to the physical or electronic location of the items or areas they seek, and to provide a description of the item(s) in question.
More recent and specialised instances of library metadata include digital libraries, such as e-print repositories and digital image libraries. While often based on library principles, their focus on non-librarian use, especially in providing metadata, means they do not follow traditional or common cataloguing approaches. Given the custom nature of the included materials, metadata fields are often specially created, e.g. taxonomic classification fields, location fields, keywords, or a copyright statement. Standard file information such as file size and format is usually included automatically.
Standardisation for library operation has been a key topic in international standardisation (ISO) for decades. Standards for metadata in digital libraries include Dublin Core, METS, MODS, DDI, the ISO-standard Digital Object Identifier (DOI), the ISO-standard Uniform Resource Name (URN), the PREMIS schema, and OAI-PMH. Leading libraries around the world publish guidance on their metadata standards strategies.
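Of the standards listed above, Dublin Core is the simplest to illustrate. The sketch below builds a small Dublin Core record for a digitised image and serialises it as XML; the element names (`title`, `creator`, `date`, etc.) and the namespace URI are the real Dublin Core terms, while the sample values are invented:

```python
# A minimal sketch of a Dublin Core record for a digitised image,
# serialised as XML. Element names and the namespace are standard
# Dublin Core; the sample values are invented for illustration.
import xml.etree.ElementTree as ET

DC_NS = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC_NS)  # use the conventional dc: prefix

record = {
    "title": "Photograph of the old town hall",
    "creator": "Unknown photographer",
    "date": "1923",
    "type": "Image",
    "format": "image/jpeg",
    "rights": "Public domain",
}

root = ET.Element("record")
for element, value in record.items():
    child = ET.SubElement(root, f"{{{DC_NS}}}{element}")
    child.text = value

xml_text = ET.tostring(root, encoding="unicode")
print(xml_text)
```

A digital image library would typically extend such a record with its own custom fields (location, keywords, taxonomic classification), which is exactly the departure from traditional cataloguing described above.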


FIREWALL...~!~!~


A firewall is a trusted hardware or software application used to block unwanted logical ports and manage unwanted traffic. In simple terms, a firewall is a barrier that keeps destructive traffic, which may be overloading your server, away from it. A firewall is useful if you do not want external users to access a particular host or service, and it can filter packets based on the source's IP address. It helps secure your data and server against being hacked. One of its most common uses is mitigating denial-of-service attacks: a firewall can prevent a certain level of DDoS attack against your server.
You can think of a firewall as a program or device that monitors and controls all data transfer between an internal network and the wider internet. A firewall ensures that all data communication, in both directions, conforms to the security rules that have been set.
Firewall technologies are configurable, so you can limit communication by direction, IP address, protocol, port, or numerous combinations of these. Communication will fail if it does not match the rule set for outgoing and incoming requests. If you have access to your server's firewall, you can configure it to enable the ports, protocols, and addresses that optimize data communication.
For security reasons you can configure your firewall to allow only TCP traffic, but this may cause users to see frequent buffering of media clips. The user experience of the presentation is compromised: greater latency and longer startup times increase the time needed to view a clip, and delivering it requires more total bandwidth.
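The rule matching described above (first-match wins, filtering by direction, protocol, IP address, and port) can be sketched as a toy packet filter. The rule set and packet fields below are simplified stand-ins for what a real firewall such as iptables evaluates:

```python
# A toy packet filter illustrating first-match firewall rules that
# test direction, protocol, source IP, and destination port.
# The rules and addresses are invented for illustration.

RULES = [
    # (direction, protocol, source_ip, dest_port, action)
    ("in",  "any", "10.0.0.66", "any", "deny"),    # blocked host first
    ("in",  "tcp", "any",       80,    "allow"),   # web traffic
    ("in",  "tcp", "any",       443,   "allow"),   # TLS
    ("in",  "any", "any",       "any", "deny"),    # default deny
]

def filter_packet(direction, protocol, source_ip, dest_port):
    """Return the action of the first rule the packet matches."""
    for r_dir, r_proto, r_ip, r_port, action in RULES:
        if (r_dir == direction
                and r_proto in ("any", protocol)
                and r_ip in ("any", source_ip)
                and r_port in ("any", dest_port)):
            return action
    return "deny"  # fail closed if no rule matches

print(filter_packet("in", "tcp", "203.0.113.9", 80))   # allowed: web traffic
print(filter_packet("in", "udp", "203.0.113.9", 53))   # denied: default rule
print(filter_packet("in", "tcp", "10.0.0.66", 80))     # denied: blocked host
```

Note that the blocked-host rule is listed before the web-traffic rule: with first-match semantics, rule order is part of the policy.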

General types of Firewall :

Desktop Firewall :


Almost all operating systems have a built-in firewall application responsible for protecting the standalone machine. This type of firewall is designed to protect a single desktop computer and is a good protection mechanism if the network firewall is compromised.


Software Firewall :

As the name suggests, this is a software application installed on a server that is capable of preventing unwanted data communication. Many webmasters do not consider a software firewall the most secure option on its own. This type of firewall is often used as an application firewall, meaning it is optimized to protect applications such as web applications and email servers. Software firewalls have built-in filters to inspect the content of network traffic. This type of firewall usually (but not always) sits behind a hardware firewall.


Hardware Firewall :

A hardware firewall is a dedicated hardware device deployed in front of your system or server. These firewalls are typically network routers with additional firewall capabilities. Hardware firewalls are usually designed to handle sites with heavy network traffic. They are often placed on the perimeter of the network to filter out internet noise and allow only pre-determined traffic into the network.


Hardware firewalls are often used in conjunction with software firewalls: the hardware firewall filters the traffic and the software firewall inspects what remains, so unwanted traffic can be blocked easily. If the network is bombarded with bogus traffic, the hardware firewall drops it and lets only legitimate traffic through to your network or server. The hardware firewall thus not only protects the software firewall but also leaves it to inspect only proper network traffic; such a combination of hardware and software firewalls optimizes network throughput.

Retrieved from : 



Ahad, 27 Mac 2011

What are Office Automation Systems (OAS)?

Office automation systems (OAS) are configurations of networked computer hardware and software. A variety of office automation systems are now applied to business and communication functions that used to be performed manually or spread across multiple locations of a company, such as preparing written communications and strategic planning.

In addition, functions that once required coordinating the expertise of outside specialists in typesetting, printing, or electronic recording can now be integrated into the everyday work of an organization, saving both time and money.

Types of functions integrated by office automation systems include :

(1) electronic publishing
(2) electronic communication
(3) electronic collaboration
(4) image processing
(5) office management. 

At the heart of these systems is often a local area network (LAN). The LAN allows users to transmit data, voice, mail, and images across the network to any destination, whether that destination is in the local office on the LAN, or in another country or continent, through a connecting network. An OAS makes office work more efficient and increases productivity.

Electronic Publishing
Electronic publishing systems include word processing and desktop publishing.

Word processing software (e.g., Microsoft Word, Corel WordPerfect) allows users to create, edit, revise, store, and print documents such as letters, memos, reports, and manuscripts.

Desktop publishing software (e.g., Adobe PageMaker, Corel VENTURA, Microsoft Publisher) enables users to integrate text, images, photographs, and graphics to produce high-quality printable output. Desktop publishing software is used on a microcomputer with a mouse, scanner, and printer to create professional-looking publications. These may be newsletters, brochures, magazines, or books.

Electronic Communication
Electronic communication systems include electronic mail (e-mail), voice mail, facsimile (fax), and desktop videoconferencing.

Electronic Collaboration
Electronic collaboration is made possible through electronic meeting and collaborative work systems and teleconferencing.

Electronic meeting and collaborative work systems allow teams of coworkers to use networks of microcomputers to share information, update schedules and plans, and cooperate on projects regardless of geographic distance. 

Special software called groupware is needed to allow two or more people to edit or otherwise work on the same files simultaneously.

Image Processing
Image processing systems include electronic document management, presentation graphics, and multimedia systems. Imaging systems convert text, drawings, and photographs into digital form that can be stored in a computer system. 
This digital form can be manipulated, stored, printed, or sent via a modem to another computer. Imaging systems may use scanners, digital cameras, video capture cards, or advanced graphic computers. Companies use imaging systems for a variety of documents such as insurance forms, medical records, dental records, and mortgage applications.

Office Management
Office management systems include electronic office accessories, electronic scheduling, and task management. These systems provide an electronic means of organizing people, projects, and data. Business dates, appointments, notes, and client contact information can be created, edited, stored, and retrieved. 

Additionally, automatic reminders about crucial dates and appointments can be programmed. Projects and tasks can be allocated, subdivided, and planned. All of these actions can either be done individually or for an entire group. Computerized systems that automate these office functions can dramatically increase productivity and improve communication within an organization.
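The automatic-reminder feature mentioned above can be sketched in a few lines. The task names and dates below are invented; the idea is simply to scan stored appointments and surface those falling within an upcoming window:

```python
# A minimal sketch of an automatic-reminder check such as an office
# management system might run each morning. Tasks and dates are
# invented for illustration.
from datetime import date, timedelta

appointments = [
    {"task": "Client meeting",  "due": date(2011, 3, 28)},
    {"task": "Submit report",   "due": date(2011, 4, 5)},
    {"task": "Project review",  "due": date(2011, 3, 30)},
]

def reminders_due(appointments, today, horizon_days=3):
    """Return the tasks due between today and `horizon_days` from now."""
    horizon = today + timedelta(days=horizon_days)
    return [a["task"] for a in appointments if today <= a["due"] <= horizon]

print(reminders_due(appointments, date(2011, 3, 27)))
```

A real office management system would attach these reminders to individual users or groups, matching the individual-versus-group allocation described above.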