The algorithm used in GSM mobile phones. Alex Biryukov and Adi Shamir have shown this algorithm to be insecure (9/12/99).
“A5/1 is the strong version of the encryption algorithm used by about 100 million GSM customers in Europe to protect the over-the-air privacy of their cellular voice and data communication. The best published attacks against it require between 2^40 and 2^45 steps. This level of security makes it vulnerable to hardware-based attacks by large organizations, but not to software-based attacks on multiple targets by hackers.
“In this paper we describe a new attack on A5/1, which is based on subtle flaws in the tap structure of the registers, their noninvertible clocking mechanism, and their frequent resets. The attack can find the key in less than a second on a single PC with 128 MB RAM and two 73 GB hard disks, by analysing the output of the A5/1 algorithm in the first two minutes of the conversation…”
Real Time Cryptanalysis of the Alleged A5/1 on a PC (preliminary draft) Alex Biryukov Adi Shamir
December 9, 1999
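The ‹noninvertible clocking mechanism› the authors exploit is A5/1’s majority rule: each of the three shift registers advances only when its clocking bit agrees with the majority vote. A minimal sketch of that rule (register lengths and feedback taps omitted):

```python
def majority(a, b, c):
    # Majority vote over the three registers' clocking bits
    return (a & b) | (a & c) | (b & c)

def registers_to_clock(clock_bits):
    # A register advances only if its clocking bit matches the
    # majority, so at every step either two or all three move --
    # and a step cannot be uniquely run backwards.
    m = majority(*clock_bits)
    return [b == m for b in clock_bits]

print(registers_to_clock((0, 1, 1)))   # [False, True, True]
print(registers_to_clock((1, 1, 1)))   # [True, True, True]
```

Because two or three registers move at each step, several prior states can lead to the same state, which is what makes the clocking noninvertible.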
Authentication verifies the person’s (or machine’s) identity: that is, confirms that the entity seeking access is who or what it claims to be.
Authorization involves a set of rules that defines what the authenticated entity is able to do once access is granted.
Accounting produces a record of the resources used and the actions taken by the identity after access.
Computer abuse, which might involve the user in ‹an abuse of privilege›, is the active and intentional misuse of the system’s facilities (such as unauthorized access to, or alteration of, data). However, the term can also be used to describe the proper use of a system for an improper purpose, such as fraud, denial of service or embezzlement.
Abuse@ISPconcerned.com is increasingly being used as a standard address format for users to report e-mail abuse (often spam) to the service provider. In this sense, e-mail abuse can include junk mail, abusive or offensive content, malware and so on.
Acceptable Use Policy (AUP)
An AUP is a formal definition of the organization’s terms and conditions for the use of its equipment or services. It often refers specifically to Internet use. For example, a company’s Internet AUP would specify what is and what is not acceptable use of the Internet on company equipment and/or in company hours.
The AUP should be a written document, and ideally part of the terms of employment. It should be acknowledged by all employees, and breach of the AUP should be a disciplinary matter.
The interaction between a subject (usually a user, but could be a program or another IT device) and an object (such as a computer, a file or a device attached to the computer) that results in the flow of information from one to the other.
That’s the technical term. In reality, ‹access› is normally about a user getting on to a computer; that is, accessing the system and its files.
The prevention of unauthorized access to the resources of an IT product, programs, processes, systems, or other IT products.
Some suppliers consider preventing unauthorized users from logging on to the system to be access control. In reality, access control should also stop logged on users accessing objects (files, devices, etc) for which they have no authorization.
The ‹strength› of access control is often described in terms of ‹factors›. The greater the number of factors, the stronger the control.
five-factor = password + token + biometric + geography + user profiling
Only the first three are in common usage, but we can expect user profiling to play a greater part in the future.
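The factor arithmetic above amounts to a simple conjunction: access is granted only if every required factor checks out independently. A minimal sketch, with illustrative factor names:

```python
def authenticate(verified, required_factors):
    # Grant access only if every required factor was independently
    # verified; `verified` maps factor name -> bool, each check
    # having been performed by the relevant subsystem.
    return all(verified.get(f, False) for f in required_factors)

two_factor = ("password", "token")   # something you know + something you own
print(authenticate({"password": True, "token": True}, two_factor))  # True
print(authenticate({"password": True}, two_factor))                 # False
```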
See also: role based access control
Access Control List (ACL)
An ACL is a common mechanism used to specify Access Permissions. In its simplest form, it is a list kept for each managed object (files, devices, etc) in a system, specifying which subjects can perform which types of access to that object. If the subject’s ID (it could be a user or a device) is not included in the ACL, then that subject is not allowed to access the object.
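A minimal sketch of such a list, with hypothetical objects and subject IDs – note the default deny when a subject does not appear in an object’s list:

```python
# Hypothetical ACL: one entry per managed object, mapping each
# subject ID to the set of access types it may perform.
acl = {
    "payroll.dat": {"alice": {"read", "write"}, "bob": {"read"}},
    "printer-1":   {"bob": {"use"}},
}

def is_allowed(subject, access_type, obj):
    # A subject absent from the object's list is denied by default
    return access_type in acl.get(obj, {}).get(subject, set())

print(is_allowed("alice", "write", "payroll.dat"))  # True
print(is_allowed("bob", "write", "payroll.dat"))    # False
```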
A set of permissions associated with every file and directory that determines who can read it, write to it, or execute it. Only the owner of the file (or the super-user, Administrator or God Account) can change these permissions.
Maintaining access permissions is one of the most burdensome – but nevertheless one of the most important – aspects of administering system security.
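On a Unix-style system, setting and reading these permissions can be sketched with Python’s standard `os` and `stat` modules:

```python
import os
import stat
import tempfile

# Create a file and restrict it so only its owner may read or write it
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)   # i.e. mode 0o600

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))   # 0o600 -- no group or world access
os.remove(path)
```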
Collectively, ‹access rights› is synonymous with access permissions.
A security device that normally attaches to a COM port on a system which, when used in conjunction with appropriate software or hardware, allows authorized access to that system. Examples include smart cards and smart card readers, and touch memory devices such as those produced by Dallas Semiconductor.
Tokens are often described as the second factor in two-factor access control: something you own.
One of the fundamental requirements of information security, accountability is the property that enables activities on a system to be traced to specific entities, which may then be held responsible for their actions. It requires an authentication system (to identify Users) and an audit trail (to log activities against Users).
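The audit-trail half of this can be sketched as little more than an append-only log of who did what, and when (the names and actions below are illustrative):

```python
import datetime

audit_trail = []

def log_action(user, action):
    # Record who did what, and when -- the raw material that lets
    # activity be traced back to an authenticated identity
    audit_trail.append({
        "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "action": action,
    })

log_action("alice", "opened payroll.dat")
log_action("bob", "changed firewall rule 7")
print([entry["user"] for entry in audit_trail])   # ['alice', 'bob']
```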
Accounting is the process of keeping track of a user’s activity while accessing the network resources, including the amount of time spent in the network, the services accessed while there and the amount of data transferred during the session. Accounting data is used for trend analysis, capacity planning, billing, auditing and cost allocation.
The accounting process provides valuable security information for incident response and other security processes.
See also: Accountability
A formal process of approval. It is usually applied and required by bureaucracies and large organizations before the purchase of computer hardware and/or software. The accreditation process should be undertaken by specialist staff, and the hardware or software should be evaluated against a formal list of requirements and considerations. In many cases, the vendor itself, as well as the products under consideration, will need to be accredited.
Where a third party organization is used to undertake accreditation, it is known as an accreditation authority (see CLEF as an example of an accreditation authority). The individual responsible for accrediting a system is the accreditation agent.
In the wider sense, active content refers to data that includes script that makes something happen (as opposed to static content, such as simple text or images that just get displayed). In this sense active content could include HTML (since it can redirect the browser elsewhere, or load and run applications) and even animated GIFs (since the animation is something happening).
While active content can enrich the Internet experience, it brings with it a vast array of security dangers and problems. In reality, the attempt to control the problems through simply banning their use is very Luddite. Active content is here to stay – and increase. Security Managers must find active ways to combat the danger from active content.
“ActiveX is code which defines Microsoft’s interaction between Web servers, clients, add-ins and Microsoft Office applications. Basically, be afraid. Be very afraid. ActiveX applications can have full access to your system. In most instances this access will be quite legitimate, but a malicious ActiveX application is extremely worrying. The danger in malicious ActiveX code is when it has the capability to access and siphon data from your hard drive. A graphic illustration of this was given by the Chaos Club in Germany who demonstrated live on television a transfer of money from one bank account to another using an Internet-installed add-in – without the knowledge of the bank or the account holders.”
from Content Technologies’ Guide to Content Security
Security that has been added to a system after it has been released and is operational; that is, security that has been retrofitted.
This is the norm for most commercial systems software.
Address Resolution Protocol
ARP – A protocol for mapping an Internet Protocol address (IP address) to a physical machine address that is recognized in the local network.
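The mapping can be pictured as a simple cache that the network stack consults before broadcasting a ‹who-has› query; the addresses below are illustrative:

```python
# Minimal sketch of an ARP cache: the network stack keeps a table
# like this and consults it before broadcasting an ARP request.
arp_cache = {"192.168.1.10": "aa:bb:cc:dd:ee:ff"}

def resolve(ip):
    # A real stack would broadcast an ARP request on a miss and
    # cache the reply; here a miss simply returns None.
    return arp_cache.get(ip)

print(resolve("192.168.1.10"))   # aa:bb:cc:dd:ee:ff
print(resolve("192.168.1.99"))   # None
```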
Administrator is the default account name for the main system management account in the NT family of operating systems – similar to the ROOT account in Unix systems. Attaining Administrator permissions is the target of NT crackers just as obtaining ROOT is the target for Unix crackers.
A person or persons with responsibility for managing the system(s) in use.
Asymmetric Digital Subscriber Line; one of a group of high speed digital subscriber telephone lines ideal for use with the Internet. Together, they are known as xDSL. A primary advantage is that they can use existing copper wire (that is, the existing telephone infrastructure).
ADSL is ‹asymmetric› because it offers different upload and download speeds (between 1.5Mbps and 8.5Mbps downstream; and 16 to 640kbps upstream). The primary performance limiter is the distance between the subscriber and the Central Office (telephone Exchange). For example, if the distance is less than 9000 feet, the download speed could be in excess of 8Mbps; but if the distance is 18,000 feet, then the download speed can only achieve 1.54Mbps.
A feature of ADSL is that it is ‹always on›. That is, dial-up Internet users will not need to go through a separate process of dialling into the Internet for each session. Similarly, incoming e-mails will come straight to the user’s computer without having to wait to be collected from the ISP’s mailbox.
But ‹always on› introduces new security threats to the dial up user: just as the Internet is always available to the PC, so the PC is always available to the Internet (and the threats it contains). It is important, therefore, that ADSL users install adequate security software: encryption to protect sensitive files; solid anti-virus tools to avoid virus infection and aid recovery if necessary; and a personal firewall to help keep hackers and crackers at bay.
Adware is software that carries advertising. The software is usually free provided that the user agrees to accept the receipt of advertisements (either in the form of a banner within the application, or as separate pop-up Windows). There is nothing wrong with this arrangement provided everything is openly and clearly agreed between all parties concerned.
Adware becomes a concern when it starts to incorporate elements of spyware; that is, it starts to send information about the user (such as the user’s Internet browsing habits) back to the originator (supposedly so that more relevant advertisements can be directed to the user). While this is definitely a privacy issue, it can also be a security issue (if, for example, the information gleaned could be used to blackmail the user).
Other concerns about adware are that it consumes CPU power and corporate bandwidth, and often uses the user’s own hard drive to store the advertisements that will continually pop-up.
Just as there are specialist products to tackle the threat of viruses, so there are specialist products to tackle the threat of adware. If everything that the adware does is open and agreed, it is perfectly legitimate. If any aspect of what it does is hidden or disguised, then it is effectively a Trojan Horse.
AES (Advanced Encryption Standard)
AES is a FIPS publication being developed by NIST. It is intended to specify an unclassified, publicly-disclosed, symmetric encryption algorithm, available royalty-free worldwide. The route chosen has been a ‹competition› between a number of proposed algorithms.
In the late Summer of 2000, RIJNDAEL was chosen to be the new AES.
DES, the long-standing US encryption standard, has been cracked. The US Government is now going through a formal process to replace DES. This effort, called the Advanced Encryption Standard, will take several years to decide on a final algorithm, and more years for it to be proven out in actual use, and carefully scrutinized by public cryptanalysts for hidden weaknesses. If you are designing products to appear five to ten years from now, the AES might be a good source of an encryption algorithm for you. In the meantime, triple-DES might be a better solution than single DES.
It is important not to underestimate the importance of AES to the future of information security. The original official AES candidate list of 15 follows:
Entrust Technologies, Inc. (represented by Carlisle Adams)
Future Systems, Inc. (represented by Chae Hoon Lim)
Richard Outerbridge, Lars Knudsen
CNRS – Centre National pour la Recherche Scientifique – Ecole Normale Superieure (represented by Serge Vaudenay)
NTT – Nippon Telegraph and Telephone Corporation (represented by Masayuki Kanda)
TecApro Internacional S.A. (represented by Dianelos Georgoudis)
Lawrie Brown, Josef Pieprzyk, Jennifer Seberry
Magenta – Deutsche Telekom AG (represented by Dr. Klaus Huber)
IBM (represented by Nevenko Zunic)
RSA Laboratories (represented by Matthew Robshaw)
Joan Daemen, Vincent Rijmen
Cylink Corporation (represented by Dr. Lily Chen)
Ross Anderson, Eli Biham, Lars Knudsen
Bruce Schneier, John Kelsey, Doug Whiting, David Wagner, Chris Hall, Niels Ferguson
From this list, five finalists were chosen: MARS, RC6, Rijndael, Serpent and Twofish.
The ‹winner› was Rijndael.
Air gap is a technology that serves a similar purpose to the firewall: an obstacle between a trusted and an untrusted network. The concept is, however, somewhat different.
In the physical world, a firewall is a barrier between the fire and the safe area designed (but not guaranteed) to impede the passage of the fire. This is exactly how a computer firewall works.
Air gap does not place an obstacle between the two networks — it seeks to place a ‹gap› over which the ‹fire› cannot pass. In this sense, air gap technology would be better described as a ‹firebreak›.
A wireless LAN tool that can recover encryption keys. It passively monitors transmissions and computes the encryption key when enough packets have been gathered.
The terms ‹alarm› and ‹alert› are similar, but tend to be used in slightly different circumstances. Within information security, there is a tendency to use ‹alarm› as a noun, and ‹alert› as a verb.
Alarms are raised automatically by systems that detect an anomalous situation. Thus, an intrusion detection system (IDS) that detects a possible attack situation would raise an alarm.
Alerts are warnings sent by users to other users. Thus the discoverer of a vulnerability might issue an alert to warn other users. However, you could say that in raising an alarm, the IDS will alert the operator.
A set of instructions, especially ones that can be implemented on a computer, for a procedure that can manipulate data. Cryptographic algorithms are used to encrypt sensitive data files, to encrypt and decrypt messages, and to digitally sign documents.
Received opinion demands that a cryptographic algorithm should be made available for peer review. Proprietary algorithms that remain secret cannot be trusted.
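As a toy illustration of a symmetric algorithm (deliberately trivial, with no real security), a repeating-key XOR shows the encrypt/decrypt round trip:

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte with a repeating key; applying the same
    # operation twice restores the original data
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

message = b"sensitive data"
ciphertext = xor_cipher(message, b"k3y")
assert xor_cipher(ciphertext, b"k3y") == message   # same algorithm decrypts
```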
Alias / Handle
An alternative name used by people (or bestowed upon objects) to hide their true identity. Many Internet users have a genuine and valid reason to disguise their identity. Others, including hackers, crackers and script kiddies, simply prefer to avoid recognition and detection. In both cases they are likely to use an alias.
The hacker/cracker fraternity tend to use the term ‹handle› (adopted from Citizen’s Band Radio usage) rather than ‹alias›. Here it has the added intention of being a ‹nom de guerre›. Script kiddies may well choose a particular handle in an attempt to engender fear in the minds of the average law-abiding Internet user…
Alice and Bob
The mythical characters conventionally used as the dramatis personae in cryptography explanations. There is no relevance to these particular names other than, perhaps, it is a politically correct combination (one of each sex in strict alphabetic sequence).
All Party Internet Group – APIG
“The All Party Parliamentary Internet Group exists to provide a discussion forum between new media industries and Parliamentarians for the mutual benefit of both parties. Accordingly, the group considers Internet issues as they affect society, informing current Parliamentary debate through meetings, informal receptions and reports. The group is open to all Parliamentarians in both the House of Commons and House of Lords.”
‹Always-on› is a term often used to describe a broadband connection to the Internet, characterized by a constant connection between the user’s computer and the ISP. This is becoming increasingly popular with home users using DSL technologies (see ADSL).
Users with an always-on connection are at a greater security risk than traditional ‹dial-up› connections for two particular reasons:
<ol> <li>the permanent connection gives remote hackers a greater period of time to try to compromise the system via the Internet
</li> <li> always-on connections generally have a constant IP address (rather than having a different one assigned by the ISP each time the user logs on to the Internet).</li></ol>
Ambient data is a term first used by the computer forensics company NTI in 1996 and now in widespread use to describe data stored in non-traditional areas and formats – such as the Windows swap file.
Anonymity is defined in RFC2828 as “The condition of having a name that is unknown or concealed”. Roger Clarke defines an anonymous record or transaction as “one whose data cannot be associated with a particular individual, either from the data itself, or by combining the transaction with other data”.
Perceptions of the value of anonymity differ in different parts of the world. Many citizens of the United States consider the right to anonymity to be a fundamental aspect of personal freedom. Many Europeans consider anonymity to be little more than a cloak for wrongdoing. Nevertheless, even the Europeans recognize the value of anonymity by offering it to potential whistle-blowers.
Hackers and crackers often seek a degree of anonymity through the use of aliases – although within their own circles everybody seems to know everybody else.
Maintaining anonymity on the Internet is becoming increasingly difficult with increasing government regulation – which gives rise to one of those strange contortions that arise where governments want it both ways. In Europe, the use of Echelon, the RIP Act in the UK and many other laws in different countries destroys personal privacy and anonymity, while the Human Rights Act and the Data Protection Directive seek to protect privacy (and indirectly anonymity) by placing restrictions on the storage and use of personally identifiable information.
A public FTP file archive that any Internet user can access. The term “anonymous” refers to the generic account used by anyone to log in to an FTP server. Anonymous FTP does not require a real password in order to gain access: by convention, the user supplies an e-mail address in its place.
Anti-Spyware products are designed to locate and remove spyware from your computer.
Since ‹spyware› itself is a fairly loose term, anti-spyware products are often also effective against Trojans and worms. They are not generally very effective against viruses. Good PC security thus requires both anti-spyware and anti-virus defences.
Anti-virus (AV) Tools
Any hardware or software designed to stop viruses, eliminate viruses, and/or recover data affected by viruses. More usually, AV tools refers to software systems deployed at the desktop or on the server to eliminate viruses, worms, trojans, and some malicious applets. While the importance and value of AV tools cannot be overstated, they should be used as part of a wider security policy. AV tools can miss:
* brand new viruses
* stealth viruses that are introduced in a way that bypasses the AV tool
* new methods of propagation (such as an html e-mail that points to a remote URL that downloads a DOC with an embedded XLS table containing a macro virus – which was once a new method of propagation…)
It is important to note that there is a continuous leapfrog battle between the virus authors and the anti-virus vendors – and for this reason alone it is necessary to keep up to date with the latest version of your chosen AV software.
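At its simplest, the scanning half of an AV tool is pattern matching against known byte signatures – which is exactly why a brand new virus, with no signature yet, can slip through. A sketch with made-up signatures (real engines match many thousands of patterns, plus heuristics):

```python
# Hypothetical signature database: both names and byte patterns
# below are invented for illustration.
SIGNATURES = {
    "Example.Virus.A": b"\xde\xad\xbe\xef",
    "Example.Worm.B":  b"EVIL-PAYLOAD",
}

def scan(data: bytes):
    # Return the names of all signatures found in the data
    return [name for name, sig in SIGNATURES.items() if sig in data]

print(scan(b"hello EVIL-PAYLOAD world"))   # ['Example.Worm.B']
print(scan(b"clean file"))                 # []
```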
Note: the University of Hamburg’s Virus Testing Center carries out independent testing of all available virus scanning engines every six months, testing effectiveness against ‹zoo›, ‹in the wild› and macro viruses. Test results are available for anonymous FTP download from this site.
See also behavior blocking.
AntiSniff is able to remotely detect whether a computer is packet sniffing or in promiscuous mode. If it is, either the system administrator has accidentally left it in this state – or you have an intruder.
AntiSniff works by running a number of non-intrusive tests, in a variety of fashions, which determine whether or not a remote computer is listening in on all network communications.
In the UK an Anton Piller order is a court order that provides the right to search premises without prior warning. It is designed to prevent the destruction of incriminating evidence, such as evidence of the illegal use of copyrighted material.
Commonly, any miniature application, especially one used to enhance a web page by embedding a small foreign program in the page. More particularly, an applet is a special Java application that runs inside the sandbox.
Application level gateway
Application level gateways (also known as application layer firewalls and application proxy firewalls) are often described as third generation firewalls. They are software applications with two primary modes:
* proxy server
* proxy client
When a user on the trusted networks wishes to connect to a service on the untrusted network (say, Internet), the application is directed to the proxy server on the firewall. The proxy server effectively pretends to be the real server on the Internet. It evaluates the request and decides to permit or deny the request based on a set of rules that are managed for the individual network service.
If the request is approved, the server passes the request to the proxy client, which contacts the real server on the Internet. Connections from the Internet are made to the proxy client, which then passes them on to the proxy server for distribution to the real client. In this way, inbound connections are always made with the proxy client, while outbound connections are always made with the proxy server. There is no direct connection between the trusted and untrusted networks.
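The per-service rule check the proxy server performs can be sketched as follows; the rules, hosts and default-deny behaviour are all illustrative:

```python
# Per-service rules of the kind a proxy server consults before
# relaying a request; services and hosts are illustrative.
RULES = [
    {"service": "http", "allow_hosts": {"www.example.com"}},
    {"service": "ftp",  "allow_hosts": set()},   # ftp denied outright
]

def permit(service, host):
    for rule in RULES:
        if rule["service"] == service:
            return host in rule["allow_hosts"]
    return False   # no proxy exists for the service: default deny

print(permit("http", "www.example.com"))   # True
print(permit("http", "evil.example.org"))  # False
print(permit("smtp", "mail.example.com"))  # False -- no smtp proxy
```

The final case reflects a real property of this architecture: a protocol for which no proxy has been written simply cannot pass.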
Advantages include:
<ul> <li>can understand and enforce high-level protocols, such as HTTP and FTP </li> <li> maintain information about the communications passing through the firewall server: partial communication-derived state information, full application-derived state information, and partial session information</li> <li> can be used to deny access to certain network services, while permitting access to others</li> <li>able to process and manipulate packet data</li> <li>prevent direct communication between external servers and internal computers</li> <li>give users the appearance that they are communicating directly with external servers</li> <li> can route internal services, as well as external-to-internal requests, elsewhere (for example, they can route services to an HTTP server on another computer)</li> <li> can provide value-added features, such as HTTP object caching, URL filtering, and user authentication</li> <li> are good at generating audit records </li> </ul>
Disadvantages include:
<ul> <li> require you to replace the native network stack on the firewall server </li> <li> you cannot run network servers on the firewall server</li> <li> since inbound data has to be processed twice, proxy servers introduce performance delays </li> <li> a new proxy must be written for each protocol that is to pass through the firewall, often causing delays between the existence of a new network service and the availability of that service to the users</li> <li> cannot provide proxies for UDP, RPC, and other services from common protocol families</li> <li> often require modifications to clients or client procedures, thus complicating the configuration process</li> <li> vulnerable to operating-system and application-level bugs. This means that unless the proxies run on their own secure operating system, general purpose OSs such as NT will need to be ‹hardened›. </li> <li> If the network stack is not performing correctly (difficult to validate), then some of the information used to perform security checks could return incorrect information. </li> <li> may require additional passwords or other validation procedures.</li> </ul>
Evolution of the application level gateway (from comments on the Internet posted by Marcus Ranum)
“Brian Reid and a couple of folks at DEC had a corporate gateway, called gatekeeper.dec.com. Paul Vixie took over operating it and was providing services to a growing list of folks inside the company – they’d telnet in and FTP out, or whatever. I worked in one of DEC’s sales support units, for Fred Avolio, and we had an Internet connection (9600 baud!!) via an aging MicroVAXII and Fred told me to clean it up some and «make it look like what Paul has in gatekeeper» I think I have Fred’s original napkin drawing in my archives someplace. I keep meaning to look for it so I can frame it for him. 🙂 Gatekeeper in those days was what we’d now call a gateway host and there was a screening router built on another MicroVAX running an early Mogul screend. So I built something like that. But I didn’t want to give people accounts on it, so one Xmas break I wrote an FTP proxy in a fit of hacking. And it worked pretty well. So instead of giving out accounts like Paul did, I started giving people access via proxies. That worked real well. Then one of our sales guys, in a fit of enthusiasm, sold “a firewall like decuac” to a REALLY huge customer and I wound up cloning the system onto a couple of DECstations and that was, I believe, the first commercial Internet firewall. Then I had to write the documentation for the bloody thing, and so it needed a name, so we stole «SEAL» which the guys in Palo Alto had been talking about for a firewall product but what the heck, we’d already sold something. 🙂 The next best bet for the name, was «PIG» for «Packaged Internet Gateway» but that, as it were, didn’t fly. From that one customer, once the documentation had been written, sales took over and we got a little busy with firewalls from then on. 🙂“
An archive is a collection of data that is stored for a relatively long period. Archiving is the act of placing data into an archive.
Archives are necessary to support audit trails and can be used to provide ‹forensic› evidence if needed.
“The ability to automatically archive e-mail can help an organization in e-mail liability cases, and prevent reliance on users to save business-critical e-mails.
If your company’s policy is to destroy e-mails after six months, they could not be retrieved for court purposes. In general however, it is recommended that e-mails are stored for a defined period only. To be effective, therefore, archiving needs to be well managed.”
from Content Technologies’ Guide to Content Security
ARPA (Advanced Research Projects Agency)
Or DARPA (Defense Advanced Research Projects Agency). The name has changed several times – but it’s the same thing.
An agency within the US Department of Defense that distributes funds for defense related research projects. ARPA provided the initial funding for the development of platform independent wide area internetworks, which evolved into (D)ARPAnet, which in turn developed into the Internet.
The name by which the Internet was originally known. It was developed by the US DoD in the late 60s and early 70s as an experiment in Wide Area Networking: a network so redundant, so robust, and so able to operate without centralized control that it could survive nuclear war from without and terrorist acts from within.
ARPAnet came into being in 1969. Its value to scientific research grew, and in 1983 it was taken over by the US National Science Foundation (NSF). Since then it has simply grown and evolved as if it has a life of its own – and in April 1995 NSF ceased involvement.
ARPAnet started with just four hosts. Fifteen years later (1984) there were 1000 hosts. In the next ten years, as ARPAnet became the Internet, it leapt to more than 1m hosts – and before the end of the century it was more than 10m. And the figures continue to grow.
It may be, with the advent of xDSL technologies, that the ultimate end for ARPAnet is a single Internet on which every single PC, home or business, is a potential host.
American Standard Code for Information Interchange
ASCII is the 7-bit computer code that specifies the characters of the alphabet and the basic punctuation we see on the screen. Generally speaking, ASCII files are considered to be relatively safe, but…
“Basic ASCII allows only 7 bits per character (128 characters), and the first 32 characters are “unprintable” (they issue commands such as Line Feed, Form Feed and Bell). Generally, ASCII files are text files. However, with a little effort, it is possible to write programs that consist only of printable characters (see EICAR). Also, Windows batch (BAT) files and Visual Basic Script (see VBS) files are typically pure text, and yet are programs. So, it is possible for ASCII files to contain program code, and thus to contain viruses.
When sending out emails, especially those intended for a wide audience, using simple ASCII text to get your message across is the best choice. A pure-text email lets you control both content and layout exactly, and ensures that your mail will be legible by users of even the most old-fashioned email programs.”
from Sophos’ V-Files
ASCII files are sometimes called ‹text› files – and so they are. However, when we talk of a text file (.TXT) we specifically mean a file containing only ASCII characters, and specifically not containing any formatting or instruction codes. HTML is an example of the latter. It uses only ASCII characters, but contains formatting and instructions. The instructions in particular can pose a security threat (for example, sending the browser to a different location and downloading malicious code).
So if we wish to be safe, we should specify ‹text only› rather than just ‹ASCII›.
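A quick (though not foolproof) ‹text only› check simply verifies that every byte is printable 7-bit ASCII or common whitespace – note that a BAT or VBS file, being pure text, would still pass such a test:

```python
def is_plain_text(data: bytes) -> bool:
    # Accept printable 7-bit ASCII plus tab, LF and CR only
    allowed = set(range(0x20, 0x7f)) | {0x09, 0x0a, 0x0d}
    return all(b in allowed for b in data)

print(is_plain_text(b"Hello, world!\n"))      # True
print(is_plain_text(b"\x1b[31mred\x1b[0m"))   # False (escape codes)
```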
Grounds for confidence that the other four security goals (integrity, availability, confidentiality, and accountability) have been adequately met by a specific implementation. “Adequately met” includes:
* functionality that performs correctly,
* sufficient protection against unintentional errors (by users or software), and
* sufficient resistance to intentional penetration or by-pass.
Confidence that the security implementation accurately reflects and imposes the security policy.
Public Key Cryptography is also known as asymmetric cryptography. Symmetric cryptography uses the same key for both encryption and decryption. Asymmetric cryptography uses different (but related) keys for encryption and decryption. It is also called public key cryptography because the encryption key is made public (usually in an LDAP directory) while the decryption key is kept private.
Whitfield Diffie and Martin Hellman introduced the world to the concept of ‹public key› or ‹asymmetric› cryptography in their seminal 1976 paper “New Directions in Cryptography”, IEEE Transactions on Information Theory, November 1976, pp. 644-654
“Given a system of this kind, the problem of key distribution is vastly simplified. Each user generates a pair of inverse transformations, E and D, at his terminal. The deciphering transformation, D, must be kept secret but need never be communicated on any channel. The enciphering key, E, can be made public by placing it in a public directory along with the user’s name and address. Anyone can then encrypt messages and send them to the user, but no one else can decipher messages intended for him.”
It has more recently emerged, and is generally held to be probable, that James Ellis had already ‹discovered› the concept while working for the UK’s CESG. This was announced by CESG’s publication of the paper “The Story of Non-Secret Encryption”, written by Ellis in 1987, and released posthumously in 1997.
“…it is now appropriate to tell the story of the invention and development within CESG of non-secret encryption (NSE) which was our original name for what is now called PKC. The task of writing this paper has devolved on me because NSE was my idea and I can therefore describe these early developments from personal experience…”
It is important to bear in mind that the concept of PKC is different to the Diffie-Hellman Key Exchange that was introduced in Diffie and Hellman’s paper, and from the subsequent RSA implementation of that concept.
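The principle can be illustrated with ‹textbook› RSA using deliberately tiny primes – a sketch only, not a secure implementation (real keys use primes hundreds of digits long, plus padding schemes):

```python
# Toy illustration of asymmetric (public key) cryptography:
# textbook RSA with tiny primes. For illustration only -- insecure.
p, q = 61, 53
n = p * q                 # modulus, part of both keys
phi = (p - 1) * (q - 1)   # Euler's totient of n
e = 17                    # public (encryption) exponent
d = pow(e, -1, phi)       # private (decryption) exponent (Python 3.8+)

def encrypt(m: int) -> int:
    """Anyone can do this: only e and n (the public key) are needed."""
    return pow(m, e, n)

def decrypt(c: int) -> int:
    """Only the holder of d (the private key) can do this."""
    return pow(c, d, n)

message = 42
assert decrypt(encrypt(message)) == message
```

Publishing e and n while keeping d secret is exactly the public-directory scheme that Diffie and Hellman describe in the quotation above.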
An ‹attachment› is a file that is attached to an e-mail in order to be sent from one location to another. It is a means for sending binary files over the Internet.
Attachments are one of the primary means for spreading viruses and distributing Trojan Horses. For this reason it is unwise to accept unexpected attachments from unknown senders. Indeed, many companies have a strict security policy that automatically quarantines or deletes attachments.
An attempt to subvert or bypass a system’s security, which may or may not be successful. Attacks always involve an action driven by intelligence.
Attacks may be active or passive. An active attack attempts to alter or destroy data. A passive attack attempts to intercept and read data without altering it. They may also be ‹insider attacks› (that is, instigated by somebody who may legitimately be able to access the system), or ‹outsider attacks› (that is, instigated by somebody from outside of the system who has no legitimate right to access it).
By far the majority of security incidents stem from insider activity. However, increasing use of the Internet and increasing hacker activity is beginning to change this equation.
Attempts to break encryption (that is, unauthorized attempts to decipher encrypted data) are also known as ‹attacks›.
Brute Force attack
Denial of Service
NFS and NIS attacks
* The process of compiling a list of all security relevant events. This should include the user causing the event. The list itself is called the ‹audit trail› and is essential and invaluable in ensuring accountability.
* The process of compiling a list of all software and/or hardware installed on one or more PCs. Once the audit has been performed, the data can be analyzed to check for unauthorized software, missing hardware components, etc. This could be important to ensure that you are not inadvertently practising software piracy.
Audit Trail (Audit Log)
The Audit Trail is a record of all events that take place on a system, and across a network. It should provide a trace of user actions so that security events can be related to the actions of a specific individual, and is therefore the basis of the accountability requirement for a secure system.
* User/process/device authentication
A secure system requires that all users must identify themselves before they can perform any other system action. Authentication is the process of establishing the validity of the user attempting to gain access, and is thus a basic component of access control.
The primary methods of user authentication are:
* access passwords (something the user knows)
* access tokens (something the user owns)
* biometrics (something the user is, such as a fingerprint, palm print or voice print)
* geography (such as a particular workstation)
* user profiling (such as expected or acceptable behavior)
* Data authentication
Verification that the integrity of data has not been compromised.
Microsoft’s cabinet file signing technology, providing protection against dangers such as ActiveX misuse. Authenticode comes into play when applications are signed by their authors and placed in cabinet (CAB) files. A checksum is computed over the contents of the cabinet and bound to the author’s signature. This allows the recipient to be sure that the files were created by the author, and that the contents of the cabinet files have not been tampered with in transit. As the file completes downloading (or installation starts), the user is presented with a copy of the certificate and asked to press Cancel or OK.
Basically, the user places his trust in the authors and the accompanying certificate. This system can be subverted by the hacker creating his own certificate and using a name that at first glance looks like a legitimate company, such as ‹MickySoft›.
In the Chaos Club instance (see ActiveX for further details), an expired certificate was used. Even so, it would have been simpler if they had used no certificate at all, since it is not mandatory for ActiveX applications to be delivered in cabinet files. ActiveX constitutes a real content security threat.
from Content Technologies’ Guide to Content Security
See also: ActiveX
Authorization is permission for a subject to access an object. In practice, only authorized users should be allowed access to specific data or resources. The details of who is authorized to do what is usually maintained in special access control lists (ACLs).
One of the four fundamental requirements of information security, availability is the requirement that information and/or services be available to an authorized user on demand. The maintenance of availability is one of the prime functions of a security system.
Attacks against the availability of an information resource are called denial of service (DoS) attacks.
See Security Awareness.
Back Door (Trap door)
Synonymous with trap door. A way into a software system that the programmer or administrator of that system (or a cracker who has gained access) has deliberately left for himself. A typical back door will allow its designer access to the system without checking the file of authorized users.
Back doors are often designed into systems to help debugging during the design phase – and are sometimes left in accidentally. At other times they are left in deliberately, so that field engineers can maintain the system. Conspiracy theorists often claim that some major operating systems have secret back doors included at the behest of the nation’s security services. There is evidence to suggest that this has happened in the past.
See also: NSAKEY
Back Orifice (BO) is a program developed and released by The Cult of the Dead Cow (cDc). It is not a virus. It is often described as a Trojan (which it may be). But it is definitely a remote administration tool (RAT) with great potential for malicious misuse.
While BO could have some genuinely useful features (such as remote support and administration), we would not recommend that anybody install this program. If installed surreptitiously by someone with malicious intent, it has the ability to give a remote attacker full ‹sysadmin› privileges to your system. It can also ‹sniff› passwords and confidential data, and quietly mail them to the remote site.
There are two major concerns about BO. It is very easy to use (thus allowing ‹wannabes› with no great technical ability to become practising hackers); and it is extensible. The latter problem means that it will grow and ‹improve› over time.
Most major AV and intrusion detection systems can detect and remove BO; but there is likely to be a leap-frog approach as new variants and versions become available to the hacking community. See also NetBus and SubSeven.
It has been proposed that there be a distinction between ‹back up›, and ‹backup› (RFC2828).
Back up: verb; the process of copying one or more files to a separate location as a precaution against accidental or deliberate damage to the originals.
Backup: noun; the copy of one or more files stored in a separate location as a precaution against accidental or deliberate damage to the originals. A backup is not considered secure unless it is stored away from the original.
Thus a backup stored on magnetic tape (the traditional method) kept in a separate room or preferably building is considered more secure than backing up to a separate hard drive on the same computer.
A full backup copies the entire system; an incremental backup copies only those files that have changed since the last full backup; a selective backup copies only selected files.
There are now several companies that offer ‹online› incremental or selective backup; that is, important files are copied across the Internet to a secure backup on a remote computer.
The ability to recover backed up data is equally important.
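As a sketch of the distinction, an incremental backup can be implemented by comparing file modification times against the time of the last full backup (the paths and the cutoff timestamp here are illustrative assumptions):

```python
# Minimal sketch of an incremental backup: copy only files whose
# modification time is newer than the last full backup.
import shutil
from pathlib import Path

def incremental_backup(source: Path, dest: Path, last_full_backup: float):
    """Copy files under `source` changed since `last_full_backup`
    (a Unix timestamp) into `dest`, preserving relative paths."""
    copied = []
    for f in source.rglob("*"):
        if f.is_file() and f.stat().st_mtime > last_full_backup:
            target = dest / f.relative_to(source)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)   # copy2 preserves timestamps
            copied.append(f)
    return copied
```

A real backup tool would also verify the copies and record the run, supporting the recovery requirement noted above.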
Ban on Spam
Ban on Spam is the nickname given to the EU’s anti-spam directive. “It sets out specific conditions for installing so-called “cookies” on users’ personal computers and for using location data generated by mobile phones. Notably, the Directive also introduces a ‹ban on spam› throughout the EU.”
Directive 2002/58/EC on Privacy and Electronic Communications.
A term that describes the amount of information that can be passed through a communications channel in a given amount of time; that is, the capacity of the channel. The bandwidth is usually expressed in ‹bits per second›.
Thus, a T1 line has a bandwidth of 1.544 Mbps; a 56 kbps modem (often loosely called a ‹56k baud› modem) has a nominal bandwidth of 0.056 Mbps.
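As a quick sanity check of these figures, transfer time is simply size in bits divided by bandwidth in bits per second (protocol overhead and line conditions are ignored, so real transfers are slower):

```python
# Rough transfer-time comparison: a 10 MB file over a T1 line
# (1.544 Mbps) versus a 56 kbps modem.
def transfer_seconds(size_bytes: int, bandwidth_bps: float) -> float:
    return size_bytes * 8 / bandwidth_bps

size = 10 * 1024 * 1024                   # 10 MB in bytes
t1 = transfer_seconds(size, 1_544_000)    # roughly 54 seconds
modem = transfer_seconds(size, 56_000)    # roughly 25 minutes
```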
A particularly strongly protected computer that is often used as part of a firewall. It needs to be strongly defended (and hence its name) because it is directly exposed to the Internet and therefore more vulnerable to attack.
Filtering routers in a firewall usually restrict outside traffic to just one host – the Bastion. Since this is the only host that can be reached, it allows defense to be concentrated here, maximising efficiency and reducing cost.
Bayesian filtering is an analysis technique that is used within information security to recognise spam automatically. Bayesian filters use ‹Bayes’ Formula›, which is an algorithmic methodology for combining the probabilities of multiple events into a single number. The technique is used to deliver a ‹spam probability› based on the occurrence of different words or phrases within a single email.
Unlike the more simplistic ‹blacklist› spam filters (which generate a spam ‹score› based on the occurrence of differently weighted keywords, all generally taken out of context), Bayesian filters can accommodate ‹good› words or phrases as well as ‹bad› ones. The system ‹learns› to differentiate genuine email from spam by examining the words and punctuation in large samples of both types of messages. It selects a set of words and numbers (‹tokens›) from the text and compares their ratio between good mail and spam. Using the tokens, the Bayesian approach looks at new mail and calculates the probability that the message is spam. One advantage, of course, is that the spam filtering becomes attuned to the email of each individual user.
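A minimal sketch of the idea, assuming invented token counts and equal priors (real filters tokenize whole messages and weight their tokens far more carefully):

```python
# Sketch of Bayesian spam scoring: per-token spam probabilities are
# learned from counts in known spam and known good mail, then
# combined (naive Bayes) for a new message. All counts are invented.
def token_spam_prob(token, spam_counts, good_counts, n_spam, n_good):
    """P(spam | token), equal priors, with light smoothing."""
    p_t_spam = (spam_counts.get(token, 0) + 1) / (n_spam + 2)
    p_t_good = (good_counts.get(token, 0) + 1) / (n_good + 2)
    return p_t_spam / (p_t_spam + p_t_good)

def message_spam_prob(tokens, spam_counts, good_counts, n_spam, n_good):
    """Combine per-token probabilities with Bayes' formula."""
    p_spam, p_good = 1.0, 1.0
    for t in tokens:
        p = token_spam_prob(t, spam_counts, good_counts, n_spam, n_good)
        p_spam *= p
        p_good *= (1 - p)
    return p_spam / (p_spam + p_good)

# hypothetical token counts from 50 spam and 50 good messages
spam_counts = {"viagra": 40, "free": 30, "meeting": 1}
good_counts = {"viagra": 0, "free": 5, "meeting": 25}
score = message_spam_prob(["free", "viagra"],
                          spam_counts, good_counts, 50, 50)
```

Because the counts come from each user’s own mail, the filter becomes attuned to that user, as described above.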
Bulletin Board System. A stand-alone computer, connected to one or more modems and telephone lines, running software which allows callers to connect to the system and send or receive messages, files etc. Bulletin boards effectively pre-date the Internet, and their use is now in decline. A bulletin board can contain directories of files (for user downloading) and e-mail facilities (where users can exchange or post messages). Depending upon access privileges, users can read, download, upload, and even modify stored files.
Bulletin board software is required to allow the bulletin board owner to place limits on access; for example, not all BBS users should be allowed to modify files that are stored on the BBS.
Users can dial into the PC to leave messages for other people and to transfer files. Some bulletin boards are run by hobbyists for personal use, while some are run by companies in order to provide information to customers. Although the software that controls access to the bulletin board is quite sophisticated, it is still possible for hackers to break out of the confines imposed by the controlling software and to gain access to the entire hard disk on the PC being called. Therefore, never keep any confidential data on a computer that is being used as a bulletin board, or on a computer that is linked by modem or network to the computer that is running the bulletin board.
Also known as ‹sandboxing›, behavior blocking software monitors the executable actions of potentially malicious software and stops dangerous operations from taking place (such as deleting files, modifying system settings and so on).
Behavior blocking programs are often considered to be more effective than virus scanners in blocking malicious code because they monitor actual functions rather than look for a known signature. In order for a traditional virus scanner to detect a virus, it has to have the actual signature, or fingerprint, of the virus within its database. New viruses often succeed because they are not immediately recognised simply because their signatures are not yet held in the database. Behavior blocking doesn’t care whether it’s a new virus, an old virus or something completely different – it simply stops it harming the system.
Bell-LaPadula Security Model
A formal model of security policy that describes a set of access control rules. By conforming to a set of rules, the model inductively proves that the system is secure. A subject’s (usually a user’s) access to an object (usually a file) is allowed or disallowed by comparing the object’s security classification with the subject’s security clearance. The three basic rules are the *-property (star property), the simple property, and the tranquility property.
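The first two rules can be sketched as simple comparisons of clearance and classification – ‹no read up› (the simple property) and ‹no write down› (the *-property). The level names here are illustrative, and the tranquility property (which forbids relabelling on the fly) is not shown:

```python
# Sketch of the Bell-LaPadula mandatory access checks.
LEVELS = {"unclassified": 0, "confidential": 1,
          "secret": 2, "top_secret": 3}

def can_read(subject_clearance: str, object_classification: str) -> bool:
    # simple property: read only at or below your own clearance
    return LEVELS[subject_clearance] >= LEVELS[object_classification]

def can_write(subject_clearance: str, object_classification: str) -> bool:
    # *-property: write only at or above your own clearance
    return LEVELS[subject_clearance] <= LEVELS[object_classification]
```

Together the two checks ensure that information can never flow from a higher classification to a lower one.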
Tim Berners-Lee, an Englishman, is regarded as the father of the world wide web; he is in fact its inventor. (Note that the World Wide Web is NOT the Internet. Although the terms are increasingly used synonymously, the Web is a technology designed for file transfer across heterogeneous networks. The Internet is merely the biggest single network on which the Web operates.)
In the late 1980s, the computer services group at CERN (the European Laboratory for Particle Physics) in Switzerland was looking for a way to facilitate access to information. Berners-Lee, working there, developed a methodology that would later evolve into the world wide web. It was a combination of existing software systems: the Internet, hypertext, and a client/server model.
A formal security model for the integrity of subjects and objects in a system, somewhat similar to Bell LaPadula. A lattice of integrity levels is used to express integrity policies that refer to the corruption of clean higher level entities by dirty lower level entities. Information may only flow downwards.
(BINary HEXadecimal) – A method for converting non-text files (non-ASCII) into ASCII. This is needed because Internet e-mail can only handle ASCII.
“A method of encoding binary (or any other non-printable character data set) for transmission in a text-only media. Generally found in the Apple Mac world, but also one of the encoding options found in the Eudora email package.”
from Content Technologies’ Guide to Content Security, 2nd Ed
A unique and measurable characteristic of a human being used to identify an individual. An increasingly important method of authentication sometimes known as the ‹third level of authentication›:
* passwords (something you know)
* tokens (something you own)
* biometrics (something you are)
* geography (the system or location you are using)
The primary areas of biometric technology are:
Other areas of research include body odours (the chemical pattern created by human body smell), ear shape (the shape and structure of the ear), and keystroke dynamics (individual human typing rhythms and characteristics).
A key characteristic of a biometric access system is that it must operate in real time. An example could be a fingerprint scanner, which scans the fingerprint and compares the results, instantly, to a stored database of acceptable fingerprints. This should be contrasted to DNA testing. DNA is a unique biometric. However, real time analysis of DNA is not yet possible. Until this is feasible, there can be no DNA-based biometric access system.
In the meantime it appears likely that fingerprints will be used as biometric authentication on smart card based national ID cards.
The birthday problem is a traditional elementary probability and statistics problem. The solution leads us to the birthday paradox, which has a specific relevance within information security.
The problem, very much simplified, is sometimes expressed thus: ‹How many people do you need to be present in a room before the odds are good (greater than 50%) that at least two of them share a birthday?›
This is not quite the same as saying ‹What is the likelihood that someone else has the same birthday as you?› – but it sounds very similar. For this latter question, with n other people in the room, we can express the probability as: p(n) = 1 − (364/365)^n
This shows that there will need to be at least 253 people for the probability to be more than 0.5; that is, for the likelihood that someone else has the same birthday as you. This sounds ‹reasonable›. However, if we go back to the original question and be precise, the math is a little more complex. It could, however, be expressed thus: p(n) = 1 − (365/365 × 364/365 × 363/365 × … × (365−n+1)/365)
If we evaluate this formula, we get a much smaller figure; that is, there only needs to be 23 people before there is a likelihood that two of them share the same birthday. This is much fewer than most people would assume without doing the math – it seems ‹unreasonable›; and that is the paradox.
This is a bit confusing, but in the first scenario, we are looking for someone with a birthday that is the same as our own. In the second scenario, we are looking for any two people in a room of n people sharing the same birthday. It is more likely that we will find two people who share a birthday than we will find another person sharing our own birthday; hence, the birthday paradox.
We can explain it in more detail thus… If there is one person in the room, then the chances of them NOT sharing the same birthday as anyone else is precisely 365/365 (or 1) as there are no other people anyway.
If there are two people in the room, then the chances of them NOT sharing the same birthday is pretty high, i.e. 364/365, and when multiplied by the probability of the other people in the room not sharing the same birthday as anyone else you get the overall chances of all the people in the room not sharing the same birthday (365/365 x 364/365).
Add another person and there are already two people in the room (i.e. two of the possible dates in the year have gone), so the chance of the third person having a date which doesn’t match the other two is reduced to 363/365, and the chance of there being no match between any of the three is reduced to 365/365 x 364/365 x 363/365.
A fourth person and the odds become: 365/365 x 364/365 x 363/365 x 362/365 of there being no matches…
…and so on until we reach our magic 23 people – when the probability of there NOT being any two people in the room sharing the same birthday becomes 0.492702766. The probability of there being a match is 1 − 0.4927 = 0.5073: just over 50%.
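The step-by-step product above is easy to verify by direct computation:

```python
# Check the birthday figures by computing the chance that n people
# all have distinct birthdays, and hence the chance of a match.
def p_no_shared_birthday(n: int) -> float:
    p = 1.0
    for i in range(n):
        p *= (365 - i) / 365   # 365/365 * 364/365 * ... per person
    return p

def p_shared_birthday(n: int) -> float:
    return 1 - p_no_shared_birthday(n)

# With 23 people, p_no_shared_birthday(23) is about 0.4927, so the
# probability of at least one shared birthday just passes 50%.
```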
Although this is a math problem, it has a distinct relevance to information security – more specifically, to the security of digital signatures. Consider the following example of how to exploit the birthday paradox:
Bob sends transfer orders to his bank via electronic messages. Bob does not encrypt his messages but he protects them by digital signatures. Bob computes a digital signature by encrypting the hash total of his message.
Alice wants to modify the payee’s name in one of Bob’s messages. She is not able to imitate Bob’s signature. How can Alice modify Bob’s message without being noticed?
Alice’s method 1 (the inefficient one)
Alice intercepts a message from Bob; she computes the hash total; she then manipulates the message by modifying the payee’s name. Alice then generates many variants of the manipulated message. Variants have the same look as the message, but with various “invisible” differences (e.g., one space more at the end of a line, or two spaces, etc.). Finally, Alice computes the hash total of all variants, till she finds one that has the same hash total as the message Bob signed. Now, Alice is able to send the variant with Bob’s signature: the bank will accept it.
Alice’s method 2 (exploiting the birthday paradox)
Instead of intercepting one message, Alice intercepts several messages and she acts on each message as explained above: she computes the hash totals of all messages; she manipulates each message and generates variants of all manipulated messages; she hashes all variants till a variant’s hash matches the hash of an intercepted message.
Comparison of methods
Suppose that a hash total is 16 bits. There are 65536 possible hash values.
When Alice uses method 1, she needs to generate more than 32,000 variants (half of the 65,536 possible values) to have an even chance of matching the original hash. When she uses method 2, she has a fair chance of finding a match if she intercepts 16 messages and generates 16 variants of each intercepted message: this gives a total of 256 variants (2^(16/2) = 2^8), which is far fewer than method 1 requires.
Bruce Schneier (Applied Cryptography, ISBN 0-471-12845-7) describes the effect thus: “Assume that a one-way hash function is secure and the best way to attack it is by using brute force. It produces an m-bit output. Finding a message that hashes to a given hash value would require hashing 2^m random messages. Finding two messages that hash to the same value would only require hashing 2^(m/2) random messages. A machine that hashes a million messages per second would take 600,000 years to find a second message that matched a given 64-bit hash. The same machine could find a pair of messages that hashed to the same value in about an hour.”
(…or, to be precise, the actual numbers from these details are 584,942 years and 1.19 hours respectively.)
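The effect behind Alice’s method 2 can be demonstrated on a deliberately weakened hash – here, the first 16 bits of MD5 (the choice of MD5, and trailing spaces as the ‹invisible› variation, are illustrative):

```python
# Birthday attack demonstration on a 16-bit hash: finding ANY two
# colliding variants takes roughly 2^8 = 256 attempts, far fewer
# than the ~2^15 needed to match one fixed target hash.
import hashlib

def weak_hash(message: str) -> int:
    """First 16 bits of MD5 -- weak on purpose, for illustration."""
    return int.from_bytes(hashlib.md5(message.encode()).digest()[:2], "big")

def find_collision(base: str):
    """Generate variants (extra trailing spaces) until two collide."""
    seen = {}
    for i in range(1, 1 << 17):
        variant = base + " " * i
        h = weak_hash(variant)
        if h in seen:
            return seen[h], variant, i   # i variants examined
        seen[h] = variant

m1, m2, tries = find_collision("Pay Alice $100")
```

Running this typically finds a collision after a few hundred variants, in line with the 2^(m/2) figure in Schneier’s description.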
Black Hats/White Hats
There is no standard definition for Black Hats and White Hats – but the meaning is fairly clear. Good cowboys always wear white hats; bad cowboys always wear black hats.
So, White Hats are those on the side of law and order and secure computers. Black Hats are the forces of chaos and disorder. You could say that hackers are White Hats, crackers are Black Hats, and script kiddies don’t qualify for any hats.
But don’t take the description too seriously. It’s not a hard and fast rule. The terms grew as a humorous reference – and there’s even a new term just beginning to take vogue: the Grey Hats. Rather than ask somebody whether he’s a Black Hat or a White Hat, you might be better off asking “Which Hat are you wearing today?”
A black hole is the place where e-mail goes when it is neither received nor bounced. It is a neat analogy when we’re talking about cyberspace. The claim that an e-mail was lost in a black hole is more likely to be true than the claim that your check was lost in the post.
The term is now also used to describe a list of known spammers. Third parties maintain these lists and make them available to corporations and ISPs. The recipients can then decide whether or not to wholesale block transmissions from these sources – effectively consigning the transmissions to the earlier definition of the black hole.
This is a form of self-regulation, where the Internet is effectively regulating itself. Nevertheless, it has to be said that there are certain moral difficulties when a third party has the power to censor the content of messages he has never seen from sources he probably doesn’t know to destinations he almost certainly isn’t aware of…
The feature in many spam and IP filtering systems that allows the user or administrator to compile a list of addresses that will be disallowed. For example, an anti-spam blacklist will be a list of IP addresses from which mail will be blocked; a web filtering blacklist will be a list of websites that the user cannot access. The term also applies to the list itself.
see also: whitelist
A ‹block› is an amount of data treated as a single unit. A block cipher is a cipher that requires the accumulation of a complete block before transformation can be completed.
DES, for example, is a 64-bit block cipher.
Compare to stream cipher.
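The need to accumulate a complete block can be illustrated with a toy cipher, where a trivial byte-wise XOR stands in for a real 64-bit block transformation such as DES (this is not a real cipher, and the zero-byte padding is an illustrative simplification):

```python
# Toy illustration of block-cipher processing: data is split into
# fixed-size blocks (padded if short), each transformed as a unit.
BLOCK_SIZE = 8  # bytes, i.e. a 64-bit block as in DES

def toy_block_transform(block: bytes, key: bytes) -> bytes:
    assert len(block) == BLOCK_SIZE   # a full block must accumulate first
    return bytes(b ^ k for b, k in zip(block, key))

def encrypt(data: bytes, key: bytes) -> bytes:
    # pad the final short block with zero bytes (toy padding scheme)
    if len(data) % BLOCK_SIZE:
        data += b"\x00" * (BLOCK_SIZE - len(data) % BLOCK_SIZE)
    out = b""
    for i in range(0, len(data), BLOCK_SIZE):
        out += toy_block_transform(data[i:i + BLOCK_SIZE], key)
    return out

key = b"\x13\x37\xc0\xff\xee\x00\x42\x99"
ciphertext = encrypt(b"attack at dawn", key)  # 14 bytes -> two blocks
```

A stream cipher, by contrast, could emit each byte of output as soon as the corresponding byte of input arrived.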
Blowfish is a 64-bit block cipher developed by Bruce Schneier and released in 1994 as a free, unpatented replacement for DES or IDEA. It has a variable key length of between 32 and 448 bits, and is thus resistant to brute force attacks.
It is considered a strong and very fast algorithm: stronger and faster than DES and even 3DES, and stronger and cheaper than IDEA. Implementations in C are in the public domain.
Blowfish has greater memory requirements than many other algorithms, and is consequently not an attractive option for small devices such as smartcards. However, when implemented in software it is approximately five times the speed of 3DES.
Bluejacking is the term used to describe the process of sending a message from one Bluetooth enabled mobile phone to another local Bluetooth enabled phone. This makes location-based marketing a realistic possibility – users walking past the front of a shop could receive a message detailing current special offers, or delivering money-off vouchers. But is this spam? Many users would consider it so; and it is unlikely that it would be legal in any of the countries with anti-spam legislation. Unless, of course, the user specifically opts in to the ‹service›.
Bluetooth is a specification for short distance wireless communication between devices. At the time of writing, it is very much in its infancy. However, there is already little doubt that it will have a major effect on both office and home life.
One of the inevitable uses will be for the automatic authentication of users. Thus, if one device is in the signet ring of a user and another is in his or her PC, the ‹ring› can be used to effect automatic log on at the PC as the user enters the office. Note, however, that this is actually device, not user, authentication – it is the ring that is being authenticated, not the wearer.
A bitmap is a grid of coloured dots (pixels) which combine to make an image. BMP files record such images for display on a computer screen. Unlike other bitmap storage formats (see JPEG), BMP files are fairly primitive, and make only a modest attempt at compression. This means that large bitmaps tend to result in large BMP files. Nevertheless, BMP files are common on Windows systems because BMP is one of the standard Windows image formats. BMPs are not programs and cannot be infected with a virus.
from Sophos’ V-Files
But see also Steganography.
E-mail that does not get delivered to its address and is returned to the sender has ‹bounced›. (See also black hole.) The term is probably by analogy with a ‹bounced› check, which gets returned to the sender.
There are many reasons for a bounce. The recipient e-mail address could be mis-spelt or non-existent. The recipient server could be down. The sender’s IP could be blacklisted as a spammer, or the receiver’s firewall could be set to reject the sender’s domain or IP. Or the recipient’s content security system could detect something it objects to, and block the message (see Scunthorpe Test). In the latter case, mail is more often quarantined for further inspection than simply bounced.
The irony is that a broken long key crypto system can still be stronger than an unbroken short key crypto system.
A browser is the software application used to locate and display Web pages. The two most popular browsers are Netscape Navigator and Microsoft Internet Explorer. Both of these are graphical browsers, which means that they can display graphics as well as text. In addition, most modern browsers can present multimedia information, including sound and video, although they require plug-ins for some formats.
More and more office software is becoming web-enabled or web-based; that is, it will run over the Internet or an intranet within the browser. In effect, this is using the browser in a manner very similar to the operating system – the browser effectively becomes a cut-down operating system. Which means, of course, that all of our concerns about operating system security apply equally to the browser.
But your browser is even more susceptible to attack. It is, after all, almost always connected to the very insecure Internet, wherein doth lurk all manner of dark and nasty things… And it has its own pathway through your firewall – it has to, otherwise you wouldn’t be able to access the Internet. If hackers can find a weakness in your browser, it is very likely that they will be able to gain access to your computer.
For this reason, you must take every care to patch your browser as soon as any patches become available. Join the Home User WARP (more about WARPs here) to ensure that you hear about the latest problems and solutions relevant to whichever browser you use.
A browser hijacker is a program or code that changes your browser settings so that you are redirected to different Web sites. Most browser hijackers alter the default home pages and search pages to those of customers who pay for the traffic generated. Some add pornographic Web sites to the users’ favorites; generate pornographic pop-up windows faster than the user can click them shut; and redirect users to pornographic sites.
Many of the hijackers are also poorly coded, and can cause their own problems – slowing down or crashing the PC.
Hijackers are usually the result of a virus or worm infection, or installed surreptitiously when visiting dubious sites. Some, however, are installed ‹legitimately› as part of freeware – but only because the user didn’t read the small print in the end user license…
Brute Force attack
A type of attack in which every possible key is attempted until the correct key is found. Ciphertext is deciphered under different keys until recognizable plaintext is discovered. On average, this will take half as many attempts as there are keys in the keyspace.
“To crack a 64-bit key, it would take 10 EFF DES Crackers operating for an entire year. At 128 bits, it is totally infeasible to break a key by brute force, even if all the computers in the world are put to the task. To break one in a year would require, say, 1 trillion computers (more than 100 computers for every person on the globe), each running 10 billion times faster than the EFF DES Cracker. Put another way, it would require the equivalent of 10 billion trillion DES Crackers!”
Hiding Crimes in Cyberspace, Dorothy E. Denning and William E. Baugh, Jr. July 1999.
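A toy example of the technique against a single-byte XOR ‹cipher› shows why keyspace size is everything: with only 256 possible keys, the search succeeds almost instantly (the message, key, and ‹recognizable plaintext› test are all illustrative):

```python
# Toy brute-force attack: try every key in an 8-bit keyspace and
# look for recognizable plaintext (a known "crib" word). On average
# the key falls after half the keyspace -- here, 128 attempts.
def xor_bytes(data: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in data)

def brute_force(ciphertext: bytes, crib: bytes):
    """Try all 256 keys; return the first yielding the crib."""
    for key in range(256):
        plaintext = xor_bytes(ciphertext, key)
        if crib in plaintext:
            return key, plaintext

secret_key = 0x5A
ciphertext = xor_bytes(b"attack at dawn", secret_key)
key, plaintext = brute_force(ciphertext, b"attack")
```

Doubling the key length squares the work: a 128-bit keyspace makes the same exhaustive search infeasible, as the quotation above illustrates.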
BS 7799 / ISO 17799
Developed by the British Standards Institution (BSI) in conjunction with an international user group, BS7799 represents industry-developed standards on “Information Security Management”. BS7799 is divided into two parts.
Part 1: Code of Practice for Information Security Management.
BS 7799-1:1999, effective 15 May 1999: a standard code of practice that provides guidance on how to secure an information system.
The International Organisation for Standardisation (ISO) accepted Part 1 in December 2000 as a new international standard, which has subsequently been published as ISO 17799. ISO 17799 replaces BS 7799-1:1999.
Part 2: Specification for Information Security Management Systems.
BS 7799-2:2002, effective 5 September 2002: specifies the management framework, objectives, and control requirements for information security management systems. This version of the standard has been aligned with other management systems Standards including the ISO 9000 and ISO 14000 series.
ISO 17799 and BS 7799-2 have been adopted for use in many countries around the world including the UK, Australia, Brazil, Czech Republic, Canada, Denmark, Germany, Iceland, India, Ireland, Japan, Korea, Malaysia, the Netherlands, New Zealand, Norway, Poland, Singapore, South Africa, Sweden, Switzerland, Taiwan and United Arab Emirates.
An internationally recognised certification scheme has also been formally implemented to enable organisations to be certified against BS 7799-2. The certification is performed by internationally recognised accredited organisations. In excess of 160 BS 7799-2 certificates have been issued throughout the world including countries such as: Australia, Austria, Brazil, China, Egypt, Finland, Germany, Greece, Hong Kong, Hungary, Iceland, India, Ireland, Italy, Japan, Korea, Netherlands, Norway, Singapore, Spain, Sweden, Taiwan, UAE, UK, USA.
A buffer is an area of memory used to hold data for processing. It has a predetermined size.
If the data being placed into the buffer is too large, is not checked and is allowed to overflow the buffer, it can have unexpected effects. At best, the excess data is simply lost. At worst, the excess data might overwrite other legitimate data.
Understanding what happens during a buffer overflow can allow a hacker to take control of a system – or simply crash it.
A particular type of buffer overflow attack is an attack on the program stack (sometimes known as ‹smashing the stack›). The program stack is used to control the flow of execution of the program. By carefully controlling the buffer overflow, a hacker can overwrite and change the return address of a function – and execute code of his own choice.
This type of buffer overflow is the most common and often the most effective type of remote attack.
An Internet mailing list operated by SecurityFocus, which is now owned by Symantec. BugTraq is famous, or infamous, for its policy of full disclosure.
In a brief FAQ issued at the time of Symantec’s takeover in July 2002, the BugTraq team said:
Q. Will Symantec change SecurityFocus' vulnerability reporting policy?
A. We believe that in order for the SecurityFocus/Bugtraq community to be effective, it must be an independent entity. We believe that its current disclosure policy is appropriate for the venue. Symantec will continue to operate with its separate disclosure policy.
If BugTraq subsequently disappears or its full disclosure policy ceases, one will be reminded of the remarks of Henry II: “Who will rid me of this troublesome priest?”
C2 is a security classification/standard that is specified in TCSEC, otherwise known as The Orange Book. It is not a particularly stringent specification, but is nevertheless the most sought-after (probably because it is the easiest genuine security classification to achieve). Vendors who achieve a C2 classification for their products are not slow to boast about it.
The main criterion is that a C2 system should enforce DAC. Systems in class (C2) enforce a more finely grained discretionary access control than (C1) systems, making users individually accountable for their actions through login procedures, auditing of security-relevant events, and resource isolation.
In time, we can expect TCSEC security classifications to be replaced by Common Criteria classifications. C2 is equivalent to Common Criteria’s EAL3.
In the UK, the Cabinet Office manages the central intelligence machinery and runs a number of committees that have a role in considering cryptography and information security issues.
A very simple encryption based on letter rotation in a single alphabet, so called because Julius Caesar supposedly used it to encrypt his letters from Gaul. The Caesar Cipher simply substitutes each character with the third character along the alphabet.
Plain:  ABCDEFGHIJKLMNOPQRSTUVWXYZ
Cipher: DEFGHIJKLMNOPQRSTUVWXYZABC
Hello World becomes Khoor Zruog. It is not an effective algorithm because it does nothing to hide linguistic patterns. For example, in any given sentence of Caesar ciphertext, the most frequent character will almost inevitably equate to ‹e›. Once you have this association you will soon be able to work out other words and other associations.
The Caesar Cipher could be described as +3, modulo 26; or simply rot-3. But by association, any simple rot(N) cipher is sometimes described as a Caesar Cipher. Rot-13 is often used today on the Internet, not so much to hide the content as to make it not immediately apparent – perhaps to avoid giving away the plot when discussing a new film or novel, or perhaps to confuse content filtering software.
A security procedure for identifying a remote device. When a remote device attempts to dial in, the host system disconnects the call and then dials the number associated with the authorized user in order to re-establish the connection.
US anti-spam law. The full title is The Controlling the Assault of Non-Solicited Pornography and Marketing Act. It requires that unsolicited commercial e-mail messages be labeled (though not by a standard method) and that they include opt-out instructions and the sender’s physical address. It prohibits the use of deceptive subject lines and false headers in such messages, and the FTC is authorized (but not required) to establish a “do-not-email” registry.
The full text can be found here: www.spamlaws.com/federal/108s877.html
Originally a breakfast cereal popular in the USA. One of the best-known free gifts included in the pack was a whistle which, it was discovered, generated a 2600 Hz tone – the same frequency used by the telephone network to signal an idle long-distance trunk. Thus, blowing the whistle down a phone let the caller seize the trunk and make free calls.
The first hacker to exploit this loophole also went under the pseudonym of Cap’n Crunch. (He was also known as the Pirate and Crunchmeister; and like many other erstwhile hackers now makes a living in the legitimate security industry.)
CAPSTONE was the name of a US Government project to develop a set of standards for publicly-available cryptography, as authorized by the Computer Security Act of 1987. It envisioned the use of the Clipper chip (an integrated circuit — the Mykotronx, Inc. MYK-82 — with a Type II cryptographic processor containing crypto functions for e-mail applications using Skipjack, KEA, DSA, SHA, and basic mathematical functions to support asymmetric cryptography). It also included the key escrow feature of the Clipper chip.
The Clipper chip failed commercially, and is no longer manufactured.
A security technique that verifies that a transaction was performed online by a human rather than a computer. Captchas are also known as “Automated Turing Tests” (the name is a loose acronym of ‹Completely Automated Public Turing test to tell Computers and Humans Apart›) and were originally developed at Carnegie Mellon University. Random words or letters are displayed in a distorted fashion so that they can be deciphered by people, but not by software. This usually involves the use of graphic images of characters and numbers. Users are asked to type in what they see on screen to verify human involvement.
The purpose is to prevent bots (software agents) from performing automatic illegitimate transactions. These could include overloading online opinion polls, performing dictionary attacks to find names and passwords and grabbing thousands of free e-mail accounts for sending spam. Captchas can be used to prevent such transactions as they ensure that a real person rather than a bot made the transaction.
ITsecurity.com uses a ‹captcha› as its ‹security code› when a visitor joins the community and creates an account. In our case we do not distort the images. The advantage is that there are fewer mistakes made by people logging on (and it is therefore less frustrating); but the disadvantage is that it would be easier (but still relatively difficult) to develop software able to interpret the code.
Cardholder Information Security Program (CISP)
Visa’s CISP defines a standard for protecting cardholder information. It is an ongoing development, where the initial effort has focused on the e-commerce acceptance channel, including an active program to ensure annual validation of selected, major e-merchants' security positions.
In the future, CISP will be expanded to include mail/telephone, face-to-face, and wireless acceptance channels.
Carnivore is the original name for an e-mail surveillance system operated in the USA by the FBI, now called DCS1000. It involves the installation of a computer system at Internet Service Providers. Carnivore then intercepts all messages sent to or from the target of investigation.
The FBI apparently dubbed the system Carnivore because of its ability to get to the meat of the investigation. An earlier version of the system was called Omnivore, which simply sucked in gigabytes of data every hour, but in an indiscriminate manner. Carnivore allows the FBI to pluck out the one stream of data they are seeking from all the others.
CESG Architecture for Secure Messaging. This is a secure mail architecture compatible with the US DOD Defense Messaging System, DMS. CASM is a CESG programme that provides advice and solutions to Government departments on securing electronic mail systems.
CAST5 is a block cipher developed by Carlisle Adams and Stafford Tavares. It is a fast cipher with variable length keys (between 40 and 128 bits in multiples of 8 bits) and 64-bit blocks. It has no known weaknesses.
CAUCE, The Coalition Against Unsolicited Commercial Email, is an ad hoc, all-volunteer organization, created by Netizens to advocate for a legislative solution to the problem of UCE (a/k/a “spam”).
In the UK, the Central Computer and Telecommunications Agency.
The CCTA also handles pan-government matters in Information Technology and Telecommunications and provides resources to support those government departments that do not employ their own expert IT staff.
Until the early 1990s the CCTA had responsibility for setting policy on the security and privacy protection required for all government information designated as ‹sensitive but unclassified›. In outline, classified information – information which, if revealed, would damage the UK – was handled by CESG, with CCTA handling the rest. However, when CCTA became interested in cryptographic protection in the early 1990s, CESG moved immediately to take over its duties in setting protection policy for this class of information.
(This reference is left for historical purposes – but CCTA no longer exists.)
Compound Document Architecture.
Used by the Microsoft Office™ suite of programs for the internal structure of documents, it provides an environment where different document types can be embedded within each other. Within the context of content security, a clean Word document with an embedded Excel spreadsheet infected with the Laroux virus would fail to trigger most, if not all, anti-virus tools. Therefore files must be broken down to raw data to enable embedded objects to be presented to anti-virus tools in their natural state.
from Content Technologies' Guide to Content Security, 2nd Ed
“The CULT OF THE DEAD COW (cDc) is the most influential group of hackers in the world. Formed in 1984, the cDc has published the longest running e-zine on the Internet, swallowed swords, made waffles, and so on.” – The Deth Vegetable, cDc, Minister of Propaganda (aka ‹veggie›).
But don’t be fooled. The technical ability found within the members of cDc and other similar hacker communities is beyond compare (see L0pht).
cDc is behind Back Orifice. Whatever others might think of the ethics of producing such applications, cDc sees itself as providing a positive service to the computer community by exposing weaknesses in commercial systems, and thereby forcing the developers to strengthen those systems.
Center for Education and Research in Information Assurance and Security, at Purdue University; Director: Eugene Spafford.
The Computer Emergency Response Team, which publishes warnings of security vulnerabilities and incidents. There are many other ‹response teams›; but CERT and CIAC are the best known and most important.
The Computer Emergency Response Team/Co-ordination Center (CERT/CC) is located at Carnegie-Mellon University (CMU). It was established in December 1988 by the Defense Advanced Research Projects Agency (DARPA) to address the computer security concerns of research users of the Internet. It is operated by the Software Engineering Institute (SEI) at CMU.
The team rapidly confers with experts to diagnose and solve security problems, and to establish and maintain communications with the affected computer users and government authorities as appropriate.
The CERT/CC serves as a clearing house for the identification and repair of security vulnerabilities, informal assessments of existing systems, and both vendor and user security awareness. In addition, the team works with vendors of various systems in order to coordinate fixes for security problems.
CERT sends out security advisories to the CERT-ADVISORY mailing list whenever appropriate. It also operates a 24-hour hotline that can be called to report security problems (e.g., someone breaking into your system), as well as to obtain current, accurate information about security problems.
More correctly, we should call it a ‹digital certificate› when we mean a certificate for electronic use.
In general use, a certificate is a document that attests to the truth or ownership of something. A digital certificate is a digital document that serves the same purpose. Most specifically, it attests to the truth that you are who you say you are, and that you own the particular public key specified in the certificate.
Only Alice, using her private key, can decrypt a message that has been encrypted by her public key. Alice can therefore prove that she owns the public key. The digital certificate confirms that the owner of that particular public key is Alice. Thus the digital certificate binds Alice and her public key in a sort of digital ID Card. But, like an ID Card, it says nothing about the trustworthiness or creditworthiness of Alice – merely that this person is Alice. Its purpose is to provide trust and prevent fraud.
There are three main practical uses for digital certificates:
Certificates are issued by a Certification Authority. In their simplest form, they could just contain a public key and a name. To be of any practical use, however, the certificate will also contain an expiration date, the name of the CA that issued the certificate, a serial number, and probably additional information. In particular, it should contain the digital signature of the certificate issuer.
X.509 v3 ITU-T Recommendation
X.509 [CCI88c] specifies the authentication service for X.500 directories, as well as the widely adopted X.509 certificate syntax. Version 3 addresses some of the security issues and limited flexibility apparent in the earlier versions. This appears to be the most widely adopted certificate specification, and can be taken as the standard certificate.
The IETF is also working on an alternative certificate format as part of the Simple PKI proposals. This approach requires none of the hierarchical certificate authorizations required by X.509 certificates, and has much merit. X.509 has, however, a considerable head of steam, and is unlikely to be threatened by SPKI.
PGP’s trust model, unlike X.509, is nonhierarchical. It is based on a ‹web of trust‹, rather like old-fashioned ‹introductions› in polite society. Bob knows and trusts Jim. He can therefore also trust Alice because she was introduced (that is, verified) by Jim.
The act of revoking a digital certificate prior to its natural date of expiry. Certificate revocation is performed by the Certification Authority that issued the certificate, and is enacted by placing the certificate’s serial number into a Certificate Revocation List.
Revocation would be necessary, for example, if the private key in a public key cryptosystem becomes compromised. Within an organization, the company needs to revoke the company certificate of staff who leave or are sacked.
Certificate Revocation List (CRL)
Digital Certificates occasionally need to be revoked before their specified expiry date. There are many reasons, including:
* a woman marries and changes her surname
* a member of staff (with a company certified certificate) resigns or is fired
* the owner’s public key is compromised
If any of these happen, the certificate can no longer be trusted.
A CRL is the mechanism by which people can be certain that a certificate is still valid. It is nothing more than a list of digital certificate serial numbers stored in a special directory (probably an LDAP directory), although it could contain additional information such as the reason for revocation. If the serial number is in the list, the associated certificate has been revoked.
CRLs are one of the most problematic areas of PKI. If you don’t check the CRL, you cannot be certain that the certificate hasn’t been revoked. If you do check the CRL, you can only be certain that the certificate hadn’t been revoked at the time the CRL was last updated. The CRLs themselves are maintained by CAs for certificates that they have issued. As global e-commerce grows, the number of issued certificates will multiply – and so, inevitably, will the number of revoked certificates. Users will then need to decide, almost on a case by case basis, whether it is worth checking the CRLs for each certificate. Obviously, the greater the value or importance of the associated transaction, the more important it is to check the certificate.
But we shouldn’t get paranoid – we already transact business in the real world without necessarily checking everything. And CRLs make electronic verification a lot easier.
ii. The act of confirming the identity of an entity through the use of a digital certificate.
Formal evaluation of a security system to determine whether the system conforms to a specified set of security requirements and standards (that is, for accreditation purposes). The main technical certification standards are the US TCSEC and the European ITSEC schemes. These and others are expected to be replaced by the Common Criteria (CC), which is a new certification standard incorporating the best features of the existing standards.
Formal ratification of conformance to a recognised standard, such as BS7799; part 2 of which specifies the management framework, objectives, and control requirements for information security management systems. Other relevant standards bodies include ISO, FIPS, IEEE, ITU-T (formerly CCITT), and NIST.
Certification Authority (CA)
A certification authority is a trusted third party who confirms the identity of an organization or individual (an entity). The CA will first satisfy itself that the entity is exactly who or what it claims to be, and will then issue that entity with a ‹certificate›. The certificate is likely to be an X.509 standard certificate. It can be presented electronically to the CA for verification and confirmation at any time by a trading partner. In some ways the certificate is analogous to a credit card. Both the certificate and the credit card allow two parties to trade with some degree of security without any further proof of identity.
CAs are a fundamental part of PKI and ‹trust› in electronic commerce. Traders need to trust the certificates issued by CAs in order to be confident in the identity of the trading partner. They therefore need to trust the CA.
Any organization can issue certificates, and it is frequently advantageous for organizations to issue their own certificates for their own members of staff. In such an instance, the company itself is the CA. In order for these certificates to be trusted by third parties, the company will itself seek a certificate from another CA – who may in turn need to have a certificate issued by another CA.
Certificate Authorities thus act in a hierarchical structure, each CA being ‹certified› by another CA until one of known and unimpeachable trust is reached. This ‹highest level› CA is known as a ‹root› CA. Its own certificate is self-signed (it is self-certified) because there is no higher CA.
Certification Practice Statement – CPS
The CPS is defined within RFC2828 as “a statement of the practices which a certification authority employs in issuing certificates”, but has come to be used as a means of liability limitation. One leading CA has the following statement in its CPS: ThisCA does “not warrant the accuracy, authenticity, reliability, completeness, currentness, merchantability, or fitness of any information contained in certificates or otherwise compiled, published, or disseminated by or on behalf of issuing authorities and ThisCA, shall not incur liability…”
Which all begs the question: if the CA itself doesn’t trust what it does, how can anyone else? Indeed, Australian academic Roger Clarke has written: “The legal situation is far from clear; but it seems that CAs might well be successful in their attempts: a relying party appears to have little or no legal protection, not just if the CA was wrong, but even if the CA was negligent. To the extent that CAs successfully avoid liability when the assurance is misplaced, their role in the process is valueless.”
In the UK, the Communications-Electronics Security Group.
CESG is the part of GCHQ that is responsible for protecting UK government communications. In the jargon this is COMSEC – communications security. CESG also has responsibility for computer security – COMPUSEC – and for protective information security – INFOSEC. It likes to be called the ‹UK National Authority› for such matters although its mandate in respect of other government departments is advisory.
Its main responsibility is for designing and approving cryptographic algorithms for UK government use and for implementing them in prototype form. For some government departments it also builds complete communications systems but for others it simply supplies cryptographic algorithms or hardware.
CESG is located on the GCHQ Benhall site in Cheltenham. It used to be funded centrally but has now moved onto a repayment basis in which a significant part of its income must be obtained from its customers for services provided (for example, in its role as arbiter for UK ITSEC and CC certification). On the one hand this means that the process of ITSEC/CC certification in the UK can be considered expensive, while on the other hand it is forcing a government agency to become involved in real life commercial issues. Ultimately, the latter can only be a good thing.
It has to be said that security purists have doubts over the real motives behind CESG activities; while security vendors are driven by the commercial reality of recognizing CESG procurement potential.
“As the UK government’s technical authority for Information Security (Infosec), we have a key role in helping ensure up-to-date guidance and standards are available to those who need them. This work is managed as a «common-good» activity on behalf of all UK government departments and agencies. Much of the general Infosec guidance we produce is also relevant to local government and comparable public sector applications.” – CESG
Common Gateway Interface
“CGI scripts are commonly used on Web sites to achieve customised results. Generally, when the visitor performs some action, such as filling in a form or clicking on a link, the server executes a script using information input by the visitor. This allows the appearance or behaviour of the Web site to be customised for that visitor.
“Of course, this means that the server is executing a program at the request of an outsider, using input provided by that outsider. Since many programs behave incorrectly when presented with illegal or invalid input, it may be possible for an attacker to “feed” a CGI script with input that will cause it to misbehave in such a way that the attacker can hack the site.
“Additionally, some CGI scripts record the information input by the visitor (for example, scripts to allow on-line shopping may record the contents of the current visitor’s virtual shopping basket) in temporary files. If the scripts are incorrectly configured by the administrator, these files may be written in amongst the data files making up the Web site itself, from where an informed attacker might be able to retrieve them later using a Web browser.”
from Sophos' V-Files
The Common Gateway Interface is an interface specification that allows static HTML web pages to talk to servers that understand the HyperText Transfer Protocol (HTTP). Data input by a user in an HTML form is sent to a CGI program on the server, which can either perform some basic data processing and return the results, or act as a conduit passing the input from the client Browser to the server. CGI remains the most common method for creating dynamic web pages.
It is also the most common means of attacking a web server. The basic problem is that CGI programming is relatively easy and quick. It tempts companies into using staff who may be less experienced than those they assign to their mission critical applications; and it even seduces experienced programmers into being less careful than they should. But as commerce moves onto the Internet, CGI is itself becoming mission critical.
Typical security holes include passing tainted input directly to the command shell via the use of shell metacharacters, using hidden variables specifying any filename on the system, and otherwise revealing more about the system than is good. The best-known CGI bug is the ‹phf› library shipped with NCSA httpd. The ‹phf› library is supposed to allow server-parsed HTML, but can be exploited to give back any file.
The usual name of the server directory in which CGI programs are held.
Challenge Handshake Authentication Protocol (CHAP)
A protocol supported on Point-to-Point Protocol (PPP) links to authenticate network peers. It uses a randomly-generated challenge and requires a matching response that depends on a cryptographic hash of the challenge and a secret key. CHAP is defined in RFC 1334.
An authentication process that requires a correct reply be provided as response to a given challenge. The response is usually a value computed from an unpredictable challenge value.
Chaos Computer Club
A hacking group founded in 1984 in the then West Germany. It is most famous, or infamous, on two counts:
* Members were once paid by the KGB to hack into US computers and obtain their secrets; but the data obtained was not considered to be worth its cost to the KGB — and the members were tracked down and arrested through the work of Clifford Stoll (who wrote about it in The Cuckoo’s Egg: Tracking a Spy Through the Maze of Computer Espionage).
* More recently, members demonstrated live on German TV that ActiveX could be used to access private bank accounts and transfer funds illegally.
A checksum is a value that is used to check the integrity of data.
Checksums are generated by a function that is dependent upon the data in question. For security purposes, checksums are generated by one-way hash functions. Once a checksum has been generated, it is either stored with or transmitted with the data in question. The integrity of the data can be checked by generating a new checksum. If the two checksums are identical, then the file has not changed. If the two checksums are different, then the data (or file) in question has been altered.
Checksums are used for three primary purposes within security:
* to confirm that archived data has not been altered since it was archived
* to confirm that a verified virus-free file has not changed and therefore is still virus-free
* to confirm that a transmitted message has not been altered in transit.
Children’s Online Privacy Protection Act of 1998 (COPPA)
COPPA comes under the auspices of the US Federal Trade Commission (FTC). You can get more details of COPPA and the FTC’s Children’s Online Privacy Protection Rule from the FTC site: http://www.ftc.gov/
Chinese Wall Model
A security model proposed by Brewer and Nash. It seeks to prevent information flow that can cause a conflict of interest (using a business consultancy as its own model). For example, write access is only granted if no other object containing unsanitized information can be read.
A chosen-ciphertext attack is one in which a cryptanalyst attempts to determine the key from knowledge of plaintext that corresponds to ciphertext selected (i.e., chosen) by the analyst. This type of attack is generally most applicable to public-key cryptosystems, for once the (private) key is known, all subsequent messages from the same source can be deciphered.
A chosen-plaintext attack is one in which a cryptanalyst attempts to determine the key from knowledge of ciphertext that corresponds to plaintext selected (i.e., chosen) by the analyst. This type of attack is generally most applicable to public-key cryptosystems, for once the (private) key is known, all subsequent messages from the same source can be deciphered.
CIAC, the Computer Incident Advisory Capability, is the computer security incident response team for the US Department of Energy (DOE) and the emergency backup response team for the National Institutes of Health (NIH). CIAC is located at the Lawrence Livermore National Laboratory in Livermore, California. CIAC is also a founding member of FIRST, the Forum of Incident Response and Security Teams, a global organization established to foster cooperation and coordination among computer security teams worldwide.
CIAC performs a role similar to that of CERT.
A cryptographic algorithm for encrypting and decrypting data.
The act of encrypting data with a cryptographic algorithm.
Ciphertext is data that has been scrambled by encryption so that it can only become meaningful again through the application of proper decryption; that is, it needs to be deciphered.
An attack against (that is, an attempt to decrypt) ciphertext when only the ciphertext itself is available (i.e., there is no known plaintext associated with the ciphertext). This will almost inevitably require guessing some plaintext that might be associated with it.
The cryptanalyst may also know the cryptographic algorithm being used, and possibly the plaintext language. However, such an attack is rarely successful against a good cryptographic system.
Circuit level gateway/firewall
A circuit level gateway is sometimes described as a second generation firewall. It provides a fast, unrestricted passage through the firewall based on predefined rules maintained in the TCP/IP kernel.
It is basically used for TCP connections. It examines each connection setup to ensure that it follows a legitimate handshake for the transport layer protocol being used. Typically, it would store the following information:
* a unique session identifier (used for tracking purposes)
* the state of the connection; ie, handshake, established, or closing
* sequencing information
* source IP address
* destination IP address
* physical network interface through which the packet arrives
* physical network interface through which the packet leaves
The firewall then checks to see whether the sending computer has permission to send to the destination, and whether the receiving computer has permission to receive from the sender. If the connection is allowed, all associated packets are routed through the firewall with no further security checks.
Strengths of circuit level gateways:
* generally faster than application layer firewalls because they perform fewer evaluations.
* can help protect an entire network by prohibiting connections between specific Internet sources and internal computers.
* can perform NAT to shield internal IP addresses from external users.
Weaknesses:
* cannot restrict access to protocol subsets other than TCP
* cannot perform strict security checks on a higher-level protocol
* limited audit abilities, but can typically tie a network data packet to an application layer protocol by building limited forms of session state
* do not offer value-added features, such as HTTP object caching, URL filtering, and authentication because they do not understand the protocols being used
* difficult to test “allow” and “deny” rules.
CISSP is the award for successful completion of the examination in computer security administered by the International Information Systems Security Certification Consortium: ISC2 (www.isc2.org). Subjects covered include access control, cryptography, network security and disaster recovery.
CISSP is a highly regarded ‹qualification› for security specialists and consultants.
In the UK, the Central Information Technology Unit.
CITU is responsible for Information Technology policy and strategy spanning government departments and for promoting the use of IT in the delivery of government services to the public.
(This reference is left for historical purposes – but CITU no longer exists.)
A security approach that addresses the security requirements of business applications and is primarily concerned with the integrity of data. These requirements concern internal consistency (enforced and maintained by the computer system), and external consistency (enforced and maintained by outside means such as auditing). The primary mechanisms are the insistence that users have access to programs that can manipulate the data rather than to the data items themselves; and that separation of duties prevents collusion to compromise security.
Where data is given different levels (or degrees) of confidentiality, the clearance level is the security level to which a user has been given authorized access.
The Clipper chip was perhaps the most notorious and public attempt by the US Government to provide law enforcement agencies access to encrypted public communications. The theory was that the chip would be included in all telecommunications hardware, providing the user with encryption, and the government with the ability to decrypt the encryption.
The scheme involved a Skipjack key common to all chips and protecting the unique serial number of the chip; and a second Skipjack key unique to each chip protecting all the data encrypted by the chip. The second key is escrowed as split key components held by NIST and the U.S. Treasury Department.
“Escrow in Clipper works as follows. Every Clipper chip bears a unique serial number and has a unique encryption key (the «chip-unique key») that is burnt in by the manufacturer under secure conditions. The chip-unique keys are split into two pieces with each half held by an «escrow agent.» Currently the two escrow agents are NIST, in the Department of Commerce, and the Treasury Department’s Automated Systems Division.
“When Alice initiates a secure Clipper communication with Bob, the two Clipper chips first agree on a one-time session key for the communication. They then exchange Law Enforcement Access Fields («LEAFs»), a stream of bits that carries the data law enforcement would need to get access to the session key for that telephone call. To prevent «rogue» encryption, Clipper chips will not communicate with each other until they exchange valid LEAFs.” A. Michael Froomkin, 1996, 1996 U. Chi. L. Forum 15
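The idea of a chip-unique key split between two escrow agents can be illustrated with a toy XOR secret split (an illustration of split-key escrow in general, not the actual Clipper/Skipjack mechanism; either half alone reveals nothing about the key):

```python
import os

def split_key(key: bytes):
    """Split a key into two shares for escrow. The first share is pure
    randomness; the second is the key XORed with it, so neither share
    alone leaks any information about the key."""
    share1 = os.urandom(len(key))                       # held by escrow agent 1
    share2 = bytes(a ^ b for a, b in zip(key, share1))  # held by escrow agent 2
    return share1, share2

def recombine(share1: bytes, share2: bytes) -> bytes:
    """Both escrow agents must cooperate to reconstruct the key."""
    return bytes(a ^ b for a, b in zip(share1, share2))

chip_unique_key = os.urandom(16)
s1, s2 = split_key(chip_unique_key)
assert recombine(s1, s2) == chip_unique_key  # both halves together recover it
```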
The Clipper project failed. It was not taken up by the public and relatively few devices with Clipper chips were ever sold.
Cloud Cover is a CESG project that aims to set standards to foster the development by industry of Public Key Infrastructure (PKI) products and services to meet the electronic key distribution requirements of HMG.
Cloud Cover is not an infrastructure itself, but rather an attempt to stimulate and participate in the growth of a pan-government PKI. The intention is to spur the development of commercial-off-the-shelf (COTS) products that meet both commercial and government requirements, so that government can achieve savings through economies of scale. To this end, CESG tries to make use of open specifications as they become stable, and is basing the development of pilot systems on existing PKI products.
The strategy is to develop a number of small, pilot installations to provide the stimulus for the development of HMG wide, interoperable PKI solutions.
Cloud Cover results will be rolled into the GSI (Government Secure Intranet).
Windows has a virtual clipboard which transparently saves anything you Cut (Ctrl-X) or Copy (Ctrl-C) from one application until you Paste (Ctrl-V) it into another. The clipboard contains one object at a time, but may store many different representations of the object. Sometimes, the clipboard will not contain the object itself, but a reference to it. If you copy an EXE file to the clipboard from Explorer, for instance, the clipboard remembers the name and location of the file, not the file itself. The contents of the clipboard can be saved to disk as a CLP file.
So, although CLP files may contain viruses or virus fragments that have been copied from another application, they cannot themselves be infected.
– from Sophos’ V-Files
Computer Operations, Audit, and Security Technology – used to be a multiple project, multiple investigator laboratory in computer security research in the Computer Sciences Department at Purdue University. Its work has now become CERIAS, still at Purdue.
CRONOS Operational Assessment of Security Technologies: a NATO project aimed at piloting PKI technology to help NATO to:
* Establish its requirements for PKI;
* Establish the resource implications of running a PKI.
CESG is running a joint pilot with France and Germany to develop interoperable PKI and messaging systems. CASM has provided leadership for the trilateral pilot, and has developed client applications in collaboration with Compaq.
Cloud Cover has provided the necessary PKI standards and has developed a CA system under contract with Baltimore. When the systems have all been deployed, the project will have demonstrated interoperable solutions from three countries and at least five different developers.
To achieve this, the PKI has had to make use of very simple key management protocols and certificate profiles. Nevertheless, interoperability has proved difficult to achieve and shows the inherent problems in getting disparate implementations to interwork. However, simple as the COAST demonstrator is, it will provide a sound basis on which to build more complex, functionally rich, yet interoperable PKI systems.
The property of being ‹collision free› is one of the requirements of the results of a cryptographic hash function. Collisions occur when two different inputs can produce the same output.
If the hash function is being used to generate a message digest as part of a digital signature, the hash function must be collision free. If it were not, you could not prove a connection between the person and this particular message – that is, the message is not provably signed.
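The birthday effect that makes collisions dangerous is easy to demonstrate by deliberately truncating a hash (a sketch; `tiny_hash` is an artificially weakened function, not a real-world digest):

```python
import hashlib

def tiny_hash(data: bytes, nbytes: int = 2) -> bytes:
    """A deliberately weakened (truncated) hash, so collisions are findable."""
    return hashlib.sha256(data).digest()[:nbytes]

# Brute-force search: with only 16 bits of output, the birthday paradox
# means a collision typically appears after a few hundred tries.
seen = {}
i = 0
while True:
    msg = b"message-%d" % i
    h = tiny_hash(msg)
    if h in seen:
        break  # two different inputs produced the same output
    seen[h] = msg
    i += 1

assert seen[h] != msg                          # different inputs...
assert tiny_hash(seen[h]) == tiny_hash(msg)    # ...same hash: a collision
```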
Common Criteria (CC)
More properly the Common Criteria for Information Technology Security Evaluation, the Common Criteria or CC is an international security evaluation scheme designed to improve upon and replace the various national evaluation schemes. Work to improve the evaluation criteria has been sponsored by CESG and its counterparts in Canada, France, Germany, the Netherlands and the USA, with the aim of replacing existing criteria (FC/TCSEC, ITSEC, CTCPEC) with a single, international standard: the Common Criteria (CC).
The agreement to develop the CC grew out of a February 1993 European Commission-sponsored workshop in Brussels on the FC-FIPS that was presented by NIST and attended by many European security professionals. The USA and Canada had already agreed to align FC and CTCPEC criteria, and the effect of the Brussels meeting was a general agreement that the time was right to harmonize European and North American security evaluation criteria.
The final objective is that certificates of evaluation issued by any of the sponsoring countries should be recognized by all co-sponsors – reducing the cost for developers and increasing the choice for users. Work began in 1993 and the project produced version 1.0 of Common Criteria in January 1996. These were refined by an extensive period of review and user trials; version 2.0 (final draft) was published in December 1997.
Version 2.0 of Common Criteria was finalized in May 1998 and was soon afterwards submitted to the International Organization for Standardization (ISO) for acceptance as an international standard. Version 2.1 of the Criteria is equivalent to ISO’s International Standard 15408.
Common Criteria Assurance Levels
EAL1: functionally tested
EAL2: structurally tested
EAL3: methodically tested and checked
EAL4: methodically designed, tested and reviewed
EAL5: semiformally designed and tested
EAL6: semiformally verified design and tested
EAL7: formally verified design and tested
Common Vulnerabilities & Exposures (CVE)
A list of standardized names for vulnerabilities and other information security exposures. The content of CVE is the result of a collaborative effort by the CVE Editorial Board, which includes representatives from numerous security-related organizations such as security tool vendors, academic institutions and government, as well as other prominent security experts. The goal of CVE is to make it easier to share data across separate vulnerability databases and security tools.
The current (as of publication) CVE version is 20010507. Each vulnerability is listed with a unique number and associated references. Each CVE entry comprises:
* The name of the CVE (also referred to as a number)
* A brief description of the security vulnerability or exposure
* Any pertinent references
For more details, visit the CVE Web site at www.cve.mitre.org/about.
Some security purists believe that you can never again trust a computer that has been compromised, since you have no way of knowing what has been done to it. On this view, the system must be completely rebuilt from scratch. This is true in theory, but might be too extreme for most users in practice.
(Computing Technology Industry Association, Lombard, IL, www.comptia.org) Formerly ABCD: The Microcomputer Industry Association, it is a membership organization of resellers, distributors and manufacturers, founded in 1982 and dedicated to business ethics and professionalism. It sets voluntary guidelines and is involved with many business issues.
CompTIA is also well known for its certifications for computer professionals, including CompTIA Security+ (security specialist).
A general term to cover fraud perpetrated by or on computers and therefore including Internet fraud.
“NCIS assesses that Internet fraud is an emerging threat that will increase significantly in the coming years, albeit from a low base. The huge growth in the Internet population and in e-commerce will provide opportunities for fraud, with share-pushing being a particularly simple and effective means of defrauding innocent investors and making large sums of money. Compared with the totality of fraud, however, the significance of Internet fraud should not be over-stated.” – NCIS, Project Trawler
One of the most common types of Internet fraud is commonly known as the Nigerian or 419 Scam. On 12 July 2002, the International Chamber of Commerce (ICC) found it necessary to issue the following warning:
“The ICC’s crime-fighting bureau is warning all internet users to beware e-mails requesting help to secretly transfer tens of millions of dollars out of Africa, even if the senders claim to be government officials or former heads of state.
The alert was sparked by a new online spin to an existing fraud, named the “419 scams” after a section in the Nigerian Penal Code.
Fraudulent e-mails typically request the recipient’s bank account details so that a large sum of money can be transferred briefly to their account. Recipients are told they will receive up to 25 percent of the remittance in return for the service, but are warned that confidentiality is vital…”
Within a few days of this warning, I received the following:
“I am Mr. Kenneth Uba, the manager bills and exchange at the Foreign Remittance Department of the Union Bank of Nigeria Plc. I am writing this letter to ask for your support and cooperation to carry out this business opportunity in my department. We discovered an abandoned sum of $31,000,000.00 (Thirty One million United States Dollars only) in an account that belongs to one of our foreign customers who died along with his entire family of a wife and two children in November 1997 in a Plane crash…”
I was then offered 20% of this sum to help Mr Uba recycle the funds in order to buy agricultural machinery.
“To commence this transaction, we require you to immediately indicate your interest by a return e-mail and enclose your private contact telephone number, fax number, full name and address and your designated bank coordinates to enable us file letter of claim to the appropriate departments for necessary approvals before the transfer can be made.”
We didn’t get that far, but had I shown interest, communication would rapidly switch to fax. And then Mr Uba would have my bank account details – and a sample of my signature…
It’s surprising how many people are still taken in by this!
The third highest of the main standard security classifications. (In the USA, ‹confidential› is the lowest of the classifications above the non-classification ‹unclassified›).
UK Government: The compromise of confidential information would be likely to:
* materially damage diplomatic relations,
* prejudice individual security or liberty,
* cause damage to the operational effectiveness or security of UK or allied forces or the effectiveness of valuable security or intelligence operations,
* work significantly against UK economic and commercial interests,
* cause significant financial loss to government or to other organisations or individuals,
* lead to significant additional government expenditure,
* impede the investigation, or facilitate the commission, of serious crime,
* impede seriously the development or operation of major government policies,
* shut down or otherwise substantially disrupt significant national operations.
Maintaining the confidentiality of information is one of the four fundamental requirements of information security (the others being accountability, availability, and integrity). Confidentiality ensures that only those entities (both users and resources such as printers and other devices) that are authorized to access data may do so. It should not be confused with privacy, which is a different concept.
Data whose confidentiality has failed is said to be compromised.
Content Scrambling System (CSS)
CSS combines player-host mutual authentication and data encryption. It is used by content providers (DVDs, CDs, e-Books) to prevent piracy and impose regional viewing restrictions. It is usually enforced by the DMCA and/or local copyright laws.
Simplistically, using a DVD as an example, it comprises three main components: the DVD, the DVD drive/player, and the host/computer.
* The DVD contains the encrypted content, has a ‹hidden› area that includes the encryption keys, and a region code indicating where in the world it is intended/allowed to be viewed
* The drive/player stores the keys used to decrypt the DVD encryption key, a regional code defining where it was intended to be sold, and an authentication secret used with the computer/host
* The computer/host has a secret used for authentication with the drive/player.
The way in which these three elements interwork can reduce piracy and prevent a DVD bought in the USA from playing on a device bought in Europe.
It has to be said that cryptographically this is a weak system; but the content industry has a growing tendency to enforce it vigorously through copyright laws. A 15-year-old Norwegian student developed a decryption system (DeCSS), and was then prosecuted by the Norwegian authorities at the vigorous behest of the Motion Picture Association of America (MPAA). 2600 magazine was subsequently ordered by the US courts to remove DeCSS from its online publication, including hyperlinks.
There are many issues involved (civil liberties, free speech, scientific research, fair use, monopolies as well as piracy and copyright), and it will take many years before it becomes clear what can and cannot be done with CSS (and similar) and the DMCA.
The common perception of information security software is that its purpose is to protect the confidentiality, integrity and availability of data. While this is true, it omits a major and rapidly growing additional threat: the threat of legal liability for illegal content held on corporate computer systems, or transmitted through corporate networks. Companies and company executives can now be held liable for any illegal content held on the network. This includes pornography, racist comments and sexual harassment either stored on disk or transmitted over company e-mail. Our definition of information security therefore needs to be expanded: its purpose is to protect the confidentiality, integrity, availability and legality of company data. We need an additional application: content security; an application that examines the content of messages to ensure they contain no threat.
But the threat contained in messages is not just liability for the content: we must also ensure that no sensitive corporate data can leak out of the company. Furthermore, viruses, Trojans and worms may be carried as attachments to the mail. Indeed, if the mail is in HTML format, it may contain active code malware in the body of the message itself. Finally, there have been several cases of malware exploiting vulnerabilities in the mail client. It follows, then, that we need some sophisticated software to analyse e-mail for both visible and invisible threats. This has come to be known as ‹content security›.
A formal documentation of the emergency actions to be taken in the event of damage, failure, and/or other disabling events that could occur to systems. The purpose is to ensure the availability of critical assets (resources) and facilitate the continuity of operations in an emergency.
The process of preparing a documented and organized approach for emergency response, back-up operations, and post-disaster recovery that will ensure the availability of critical system resources and facilitate the continuity of operations in an emergency.
The maintenance of normal business processes.
Business continuity has many threats: acts of God, economic downturns, war, strikes, etcetera. In information security terms, the maintenance of business continuity specifically refers to countermeasures to the traditional security threats: denial of service attacks, loss of data through viral attacks, and so on.
Data that is exchanged between an HTTP server and a web browser (the client). The client retains the cookie and can return it to the server at a later date.
In the beginning, the World Wide Web was conceived as a universal platform-independent means of disseminating information. It was a research tool. The Web is founded on the HTTP protocol. HTTP is ‹stateless› – which means that the server doesn’t know whether request 395 comes from the same browser as request 394, or from someone completely different.
Cookies were introduced to add ‹state› to the Web. They are small pieces of text supplied by the server, accepted by the browser, and stored on your hard disk. Since they are ‹text›, they offer no direct threat to your system. However, the next time your browser points to that particular server, the cookie is returned to the server and ‹you› can be recognised because of the cookie.
The text that is stored on your hard disk is defined by the server. It is used to uniquely identify you from among the millions of other web users. This is where cookies can be a Good Thing. Let’s say that you have a subscription to a password-protected news service. Without the cookie, the server has no idea who you are. Every time you wish to access a different page, you will need to confirm your right to do so: you’ll need to re-enter your password. However, if your browser offers up a cookie that is recognised by the server, then it knows who you are and that you have the right to access that page.
Similarly, if you are buying your groceries over the Web, you’ll need a shopping cart stored on the server. It’s a cookie that links you to your expanding shopping cart. Without cookies, the Web would remain stateless, and such practices would be impossible.
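The session mechanism described above can be sketched in a few lines (illustrative only; a real server carries the cookie in HTTP `Set-Cookie` and `Cookie` headers rather than a function argument):

```python
# Toy sketch of how a cookie adds state to stateless HTTP.
import secrets

sessions = {}  # server-side state, keyed by the cookie value

def handle_request(cookie=None):
    """Return (cookie, cart). A new visitor gets a fresh cookie and an
    empty cart; a returning browser that presents its cookie gets its
    existing cart back."""
    if cookie is None or cookie not in sessions:
        cookie = secrets.token_hex(8)   # value the server would Set-Cookie
        sessions[cookie] = []           # new, empty shopping cart
    return cookie, sessions[cookie]

cookie, cart = handle_request()         # first visit: server issues a cookie
cart.append("milk")
_, cart2 = handle_request(cookie)       # later visit: recognised by the cookie
assert cart2 == ["milk"]
_, cart3 = handle_request()             # no cookie: treated as a stranger
assert cart3 == []
```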
But cookies can also be a Bad Thing. Unless you set your browser to intercept them, you don’t even know they are there. And they can stay on your disk for years. They are not a ‹security› threat in the traditional sense: they cannot harm your data nor secretly send it to some hacker site.
But they are a ‹privacy› threat. The reason lies in their ability to generate ‹clickstream› profiling. Consider this: I am a large online department store and you are on a shopping trip. First I drop a cookie on your disk. From then on, I can record a log of every department you visit. From this I begin to see your likes and dislikes, what you are interested in, and perhaps your price range. Next time you visit, I recognise you from the cookie and can deliver targeted advertising banners based on my analysis of the clickstream data I have already collected.
Now imagine that I am a major advertising organization that acts as a banner server. Every time you visit a website that is displaying a banner from one of my clients, you are sent to my server to fetch the banner graphic. But when you do that, I deposit a cookie and check to see if you have any other cookies belonging to me (which would have been deposited when you visited a different web site with a different banner from a different client of mine). By analysing all of these cookies I can begin to see your likes and dislikes based on the websites you visit. And if I can persuade you to fill in a form to give me your email address (a free subscription? a free whitepaper?), then I’ve got you. I know where you are and what you like – and I can sell that information to the thousands of companies that might like to send you advertising mail; or might simply pay me more to deliver a specifically relevant banner to the website you’re visiting.
This more persistent and intrusive type of cookie is sometimes called a spyware cookie.
Computer Oracle and Password System: a suite of public domain shell scripts originated and managed by Dan Farmer that checks Unix systems for common security problems.
It includes a basic password cracker, and routines to check for suspicious changes in setuid programs, permissions on essential files, and suspicious system behavior.
Common Object Request Broker Architecture developed by the Object Management Group (OMG). To be exact, CORBA is the communications component of the Object Management Architecture (OMA), which defines other elements such as naming services, security and transaction services. CORBA is the term everybody uses to refer to OMA.
CORBA includes a Security Reference Model (SRM). Within the context of the SRM, it is possible to implement a wide range of specific security policies – as required by individual market segments and enterprises. The main elements are:
* Authentication, Identity, and Privilege
* Security Associations and Message Protection
* Object Invocation Access Control
* Security Auditing
* Security Management and Administration
An assessment of the cost of providing data protection, versus the cost of loss of, or compromise to, that data. It is a widely held principle that the cost of security should not exceed the value of the data to be protected.
Commercial Off-The-Shelf software. The term is generally used to describe governments’ current policy of purchasing standard software packages rather than developing their own. The intention is purely to make cost savings.
In most cases this is an obvious policy to follow. However, it raises questions over security products. Commercial security is not usually the same as military security – and there is sometimes a suspicion that some government agencies apply pressure to have their own covert facilities built into the product. See, for example, NSAKEY. During the Cold War, it was generally accepted that both sides established foreign companies that sought to get their own software into government agencies.
Another criticism levelled at COTS is the unpredictability of market forces: the company supplying COTS products today could go out of business tomorrow, or even be purchased by a foreign company.
Actions, devices, procedures, techniques, or other measures that reduce the vulnerability of a system. In this sense, the application of a security policy is the application of a countermeasure.
In defense terms, there are also counter countermeasures, and counter counter countermeasures – whose meaning is fairly obvious.
‹Covert› simply means ‹disguised›, or ‹not open›. A covert channel is thus a hidden channel.
In security terms, it is used to describe any transfer of information that violates a computer’s built-in security systems – the transfer is by definition disguised if it defeats the security defenses.
A covert storage channel is the use of memory or a storage location to deposit information that can later be accessed by other security clearances, thus defeating the security system.
Crack is a freely available program written by Alec Muffett and designed to find standard Unix eight-character DES encrypted passwords by standard guessing techniques. It is written to be flexible, configurable and fast, and to be able to make use of several networked hosts via the Berkeley rsh program (or similar), where possible. System administrators can use this to ensure that their users are not operating with weak passwords.
Since Crack can be networked, the processing load can be spread across as many computers as are available.
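Crack's core loop is simple dictionary guessing, sketched here (with SHA-256 standing in for the salted Unix DES crypt(3) hashes that Crack actually targets, purely to keep the sketch portable; names are illustrative):

```python
import hashlib

def check_wordlist(stored_hashes, wordlist):
    """Crack-style guessing: hash each candidate word and compare it
    against every stored password hash. Any match means that user chose
    a weak (dictionary) password."""
    cracked = {}
    for word in wordlist:
        h = hashlib.sha256(word.encode()).hexdigest()
        for user, stored in stored_hashes.items():
            if h == stored:
                cracked[user] = word
    return cracked

# A toy password file: alice chose a dictionary word, bob did not.
shadow = {"alice": hashlib.sha256(b"letmein").hexdigest(),
          "bob":   hashlib.sha256(b"x7$Qq9!rT2").hexdigest()}

found = check_wordlist(shadow, ["password", "letmein", "qwerty"])
assert found == {"alice": "letmein"}   # weak password found; bob's survives
```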
A cracker is generally considered to be a hacker who has turned to the dark side; that is, a hacker who breaks into other systems with the specific purpose of causing damage or stealing data. Thus a cracker is a computer enthusiast with deep knowledge of systems who uses that knowledge for personal and selfish gain.
The term seems to have been coined in the mid-’80s by hackers who wished to distinguish themselves from those engaged in theft and vandalism. It is therefore a derogatory term. However, we should not imagine that there is an absolutely clear line between hackers and crackers: there are many hackers who once were crackers (perhaps they just grew up); just like there are many security consultants who once were hackers (who perhaps got married, had a family – or mortgage – and found ‹responsibility›).
Strictly speaking, threats to information security come from crackers, not hackers. Sadly, however, little distinction is now made between the two terms in general usage.
CRAMM (the CCTA Risk Analysis and Management Method) is available from a number of commercial sources and consists of an assessment of assets and safeguards by an approved consultant and the subsequent production of a set of countermeasures to reduce risk.
A software upgrade that is considered to be essential by the vendor. A critical upgrade will typically fix an embarrassing bug, plug a security loophole, or thwart a newly discovered exploit.
Cross Site Scripting (XSS, cross-site malicious content)
Cross site scripting is sometimes abbreviated to ‹CSS›, but is better abbreviated to ‹XSS› because of possible confusion with Cascading Style Sheets (an HTML coding practice).
From: Secure Programming for Linux and Unix HOWTO
by David A. Wheeler
Copyright © 1999, 2000, 2001, 2002 by David A. Wheeler
v2.966, 13 July 2002
available online at: http://dwheeler.com/secure-programs/Secure-Programs-HOWTO.html (essential reading for all coders)
Some secure programs accept data from one untrusted user (the attacker) and pass that data on to a different user’s application (the victim). If the secure program doesn’t protect the victim, the victim’s application (e.g., their web browser) may then process that data in a way harmful to the victim. This is a particularly common problem for web applications using HTML or XML, where the problem goes by several names including «cross-site scripting», «malicious HTML tags», and «malicious content.» This book will call this problem «cross-site malicious content», since the problem isn’t limited to scripts or HTML, and its cross-site nature is fundamental. Note that this problem isn’t limited to web applications, but since this is a particular problem for them, the rest of this discussion will emphasize web applications. As will be shown in a moment, sometimes an attacker can cause a victim to send data from the victim to the secure program, so the secure program must protect the victim from himself.
6.15.1. Explanation of the Problem
Let’s begin with a simple example. Some web applications are designed to permit HTML tags in data input from users that will later be posted to other readers (e.g., in a guestbook or «reader comment» area). If nothing is done to prevent it, these tags can be used by malicious users to attack other users by inserting scripts, Java references (including references to hostile applets), DHTML tags, early document endings (via </HTML>), absurd font size requests, and so on. This capability can be exploited for a wide range of effects, such as exposing SSL-encrypted connections, accessing restricted web sites via the client, violating domain-based security policies, making the web page unreadable, making the web page unpleasant to use (e.g., via annoying banners and offensive material), permit privacy intrusions (e.g., by inserting a web bug to learn exactly who reads a certain page), creating denial-of-service attacks (e.g., by creating an «infinite» number of windows), and even very destructive attacks (by inserting attacks on security vulnerabilities such as scripting languages or buffer overflows in browsers). By embedding malicious FORM tags at the right place, an intruder may even be able to trick users into revealing sensitive information (by modifying the behavior of an existing form). This is by no means an exhaustive list of problems, but hopefully this is enough to convince you that this is a serious problem.
Most «discussion boards» have already discovered this problem, and most already take steps to prevent it in text intended to be part of a multiperson discussion. Unfortunately, many web application developers don’t realize that this is a much more general problem. Every data value that is sent from one user to another can potentially be a source for cross-site malicious posting, even if it’s not an «obvious» case of an area where arbitrary HTML is expected. The malicious data can even be supplied by the user himself, since the user may have been fooled into supplying the data via another site. CERT’s advisory gives an example of an HTML link that, when followed, causes the user to send malicious data to another site.
In short, a web application cannot accept input (including any form data) without checking, filtering, or encoding it. You can’t even pass that data back to the same user in many cases in web applications, since another user may have surreptitiously supplied the data. Even if permitting such material won’t hurt your system, it will enable your system to be a conduit of attacks to your users. Even worse, those attacks will appear to be coming from your system.
CERT describes the problem this way in their advisory:
A web site may inadvertently include malicious HTML tags or script in a dynamically generated page based on unvalidated input from untrustworthy sources (CERT Advisory CA-2000-02, Malicious HTML Tags Embedded in Client Web Requests).
More information from CERT about this is available at http://www.cert.org/archive/pdf/cross_site_scripting.pdf.
6.15.2. Solutions to Cross-Site Malicious Content
Fundamentally, this means that all web application output impacted by any user must be filtered (so characters that can cause this problem are removed), encoded (so the characters that can cause this problem are encoded in a way to prevent the problem), or validated (to ensure that only «safe» data gets through). This includes all output derived from input such as URL parameters, form data, cookies, database queries, CORBA ORB results, and data from users stored in files. In many cases, filtering and validation should be done at the input, but encoding can be done during either input validation or output generation. If you’re just passing the data through without analysis, it’s probably better to encode the data on input (so it won’t be forgotten). However, if your program processes the data, it can be easier to encode it on output instead. CERT recommends that filtering and encoding be done during data output; this isn’t a bad idea, but there are many cases where it makes sense to do it at input instead. The critical issue is to make sure that you cover all cases for every output, which is not an easy thing to do regardless of approach.
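One common form of the encoding defence is to escape HTML metacharacters in all user-derived output, e.g. (a minimal sketch using Python's standard library; the function name is illustrative):

```python
import html

def render_comment(user_input: str) -> str:
    """Encode user-supplied data before embedding it in a page, so that
    characters like < > & " cannot open tags or start scripts."""
    return "<p>" + html.escape(user_input, quote=True) + "</p>"

hostile = '<script>steal()</script>'
page = render_comment(hostile)
assert "<script>" not in page                               # neutralized
assert page == "<p>&lt;script&gt;steal()&lt;/script&gt;</p>"
```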
Crossover Error Rate (CER)
CER is a comparison metric for different biometric devices and technologies. It is the error rate at which the false acceptance rate (FAR) equals the false rejection rate (FRR). As an identification device becomes more sensitive or accurate, its FAR decreases while its FRR increases. The CER is the point at which these two rates are equal, or cross over.
In general, the lower the CER, the more accurate and reliable is the biometric device. The CER is thus a valid and useful method for comparing the performance of different biometric systems.
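The crossover can be illustrated with a toy calculation; the FAR/FRR figures below are invented for illustration and do not describe any real device:

```python
# Hypothetical error rates measured at increasing sensitivity settings.
thresholds = [1, 2, 3, 4, 5, 6, 7]
far = [0.20, 0.12, 0.08, 0.05, 0.03, 0.02, 0.01]  # false acceptances fall
frr = [0.01, 0.02, 0.03, 0.05, 0.09, 0.14, 0.20]  # false rejections rise

# The CER is the error rate at the setting where FAR and FRR meet.
i = min(range(len(thresholds)), key=lambda k: abs(far[k] - frr[k]))
print(f"crossover at threshold {thresholds[i]}: CER = {far[i]:.2f}")
# -> crossover at threshold 4: CER = 0.05
```

A device with a CER of 0.05 would be judged less accurate than one with a CER of 0.02.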
The scientific analysis of cryptographic systems. It will either demonstrate their strengths or reveal their weaknesses. In the latter case, cryptanalysis is essentially the same as attack – but with a different connotation. Cryptanalysis, performed by a cryptanalyst, is presumed to be for research purposes rather than personal gain.
A cryptographic coprocessor is a hardware module that includes a processor specialized for encryption and related processing. Such devices are usually used to offload the computationally heavy crypto overheads from the main processor to a dedicated number-crunching processor.
They are generally built with numerous security features to prevent such things as unauthorized retrieval of their data and reverse engineering (see tamper resistance).
A cryptographic coprocessor may provide only encryption, or it may include a degree of transaction processing. For example, a smart card coprocessor includes smart card functions in order to house them in the same protective environment as the encryption algorithm.
A hash function is an algorithm that maps a large data object (usually of variable size) onto a small data object of fixed size. Cryptographic hash functions are used to produce a message digest, which is an integral part of secure e-mail. They generate a checksum for the message, thus allowing the integrity of the message itself to be affirmed or denied.
Alice sends a message to Bob and attaches a message digest containing the hash result for the message. Bob computes a new hash result from the received message and compares it with the received hash result. If they are identical, Bob knows that the message received is exactly the same as the message sent; that is, it has not been altered in any way. If the two hash results are different, then Bob knows that the message from Alice has been compromised.
A cryptographic hash function has a number of specific attributes:
* it must be ‹one-way›; that is, it must be computationally infeasible to generate the input (the original message) from the hash result
* it must be collision free; that is, it must be computationally infeasible to find two different messages that produce the same hash result
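The Alice-and-Bob exchange above can be sketched with a standard hash function (SHA-256 is used here purely as an example of a cryptographic hash):

```python
import hashlib

message = b"Meet me at noon."

# Alice computes a digest and attaches it to the message.
digest_sent = hashlib.sha256(message).hexdigest()

# Bob recomputes the digest from the message he received and compares.
digest_received = hashlib.sha256(message).hexdigest()
assert digest_received == digest_sent  # identical: the message is intact

# Any alteration, however small, produces a completely different digest.
tampered = b"Meet me at 1 pm."
assert hashlib.sha256(tampered).hexdigest() != digest_sent
```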
Cryptographic System (Cryptosystem)
A complete working example of all the necessary components of a system designed for data encryption and decryption. It could be purely software if it’s a cryptosystem designed to run on a standard computer; or it could be a specialist hardware and software combination.
RSA has called it the science of using difficult problems to conceal information. It is the study and use of methods, usually mathematical, designed to render information unintelligible. Cryptography does not seek to hide the message, only the meaning (semantic content) of the message.
“Cryptography defined as ‹the science and study of secret writing› concerns the ways in which communications and data can be encoded to prevent disclosure of their contents through eavesdropping or message interception, using codes, ciphers, and other methods, so that only certain people can see the real message.” – Yaman Akdeniz, Cryptography & Encryption, August, 1996
If the process is reversible, cryptography is also concerned with the restoration of the semantic content of the message.
CSO – Chief Security Officer
The Chief Security Officer is the person in charge of all staff involved in maintaining security within the company. This also includes the physical security of staff and resources – hence, for information security specifically, the more precise title of CISO (Chief Information Security Officer).
Canadian Trusted Computer Product Evaluation Criteria: Canadian secure products criteria.
A cyberslacker is a member of staff who uses company Internet resources for non-work purposes. Viewing porn, booking holidays, making on-line domestic purchases in company time and on company resources are all forms of cyberslacking. The hidden cost of cyberslacking can be enormous.
Needless to say, many companies allow a certain amount of ‹personal› use in order to maintain a good working environment and provide a degree of control over the habit. Authorised use of company resources can hardly be classified as cyberslacking!
Cyberterrorism has long been considered a theoretical rather than actual threat. The fear is that terrorist groups, political activists and even foreign countries could launch electronic attacks against a nation’s electronic infrastructure. Since so much of society is now dependent upon computers and networks, serious disruption to this infrastructure would have a very damaging effect. And the danger is that while it would take a major power with considerable resources to develop physical weapons of mass destruction, it takes relatively little resource to develop a ‹cyber› capability (consider the disruption and cost caused by the single Code Red worm).
Received opinion, however, holds that the vast majority of Internet threat activity is generated by script kiddies who pose a great annoyance rather than a great danger. But this should be seen in relation to the reports produced by Riptech in 2002. Here, for the first time, vast numbers of actual attacks were analysed – and the findings were both surprising and disturbing. Nearly 40% of the attacks were not script kiddies blindly scanning the Internet at all, but serious, targeted attacks against specific organizations. Within these, an uncomfortably high percentage originated in countries generally considered to be ‹unfriendly› to the West in general and the USA in particular, and were aimed against aspects of the US critical infrastructure – such as utilities.
The clear inference is that cyberterrorism and cyberwarfare have already begun. In earnest.
This refers to the practice of siphoning data from users’ PCs as they surf the Net. Examples range from using HTML to mail documents back to the attacker to using cookies to extract email addresses and the last 10 sites visited. It is generally based on the browser’s ability to send email silently, or to control email flow from a hidden form.
Also known as ICE-T (or Internet Channel Enabled Threat). Some cyberwoozles can use online channels set up by Java applications to transfer data. Java applications are allowed to set up TCP/IP connections and interact with the desktop by default. A security subsystem (virtual machine) is used to filter an applet’s attempts to access disk-based files.
from Content Technologies’ Guide to Content Security, 2nd Ed
A background process (pronounced “demon”) that carries out tasks on behalf of every user. Daemons spend most of their time sleeping until something comes along which requires their help. Unix systems have many daemons. The term probably originated in its mythological counterpart and was later rationalized into Disk And Execution MONitor.
Data Protection Act 1984/1998
The Data Protection Act (1984) was subsequently amended (1998) to take account of the EC Data Protection Directive (1995). The Directive was an attempt to harmonize the different European national laws across the whole EC.
The UK legislation creates the concept of a data subject and a data user. Data users, ie keepers of databases, must register with the Data Protection Registrar and state the purposes for which they will be using gathered data.
Data subjects, ie individuals whose names appear on the database, have rights under the act to inspect their records.
The act also contains eight “principles” that govern how data should be gathered and used, and how it should be protected from unauthorized copying and modification. These are:
* Personal data must be processed fairly and lawfully.
* Personal data must be obtained only for one or more specified and lawful purposes and not further processed in any manner incompatible with that purpose or those purposes.
* Personal data must be adequate, relevant and not excessive in relation to the purpose or purposes for which they are processed.
* Personal data must be accurate and where necessary kept up to date.
* Personal data processed for any purpose or purposes shall not be kept for longer than is necessary for that purpose or those purposes.
* Personal data must be processed in accordance with the rights of data subjects under the 1998 Act.
* Appropriate security measures must be taken against unauthorised or unlawful processing of personal data and against accidental loss or destruction of or damage to personal data.
* Personal data must not be transferred to a country or territory outside the European Economic Area unless that country or territory ensures an adequate level of protection for the rights and freedoms of data subjects in relation to the processing of personal data.
A term used to describe the archiving of data. Most often it is used to describe the forced archiving of customers’ e-mail details and web browsing history by ISPs for future investigation by LEAs. Most countries are moving towards compulsory data retention on the basis that it will help the LEAs in their fight against terrorism, pedophiles and organized crime, and the maintenance of national security. The principle will be the same everywhere; only the duration for which the data is retained is likely to vary.
A wide range of civil liberties organizations petitioned the EU against allowing data retention in an open letter dated 22 May 2002 addressed to Mr Pat Cox, President of the European Parliament. This letter neatly summarizes the conflict between ‹civil liberties› and ‹national security›:
“Dear Mr. Cox…
…We believe that data retention of communications by law enforcement authorities should only be employed in exceptional cases. It should be authorised only by the judicial or other competent authorities on a case-by-case basis. When permitted, data retention must be a necessary, appropriate, proportionate and temporary measure, in accordance with the European Convention on Human Rights, the European Union Charter of Fundamental Rights, and the case law of the European Court of Human Rights…
…While the fight against terrorism is a legitimate purpose, we do not believe it can justify actions that undermine the most fundamental rights of democratic states.
Many European institutions involved in the legislative process share our position and have emphasized the importance of the decision before the European Parliament with respect to the protection of individuals› privacy.
The European data protection authorities have opposed efforts to create new data retention obligations. In a letter of 7 June 2001 to the President of the Council of the European Union, the Chairman of the Article 29 Working Group wrote that “systematic and preventive storage of EU citizens’ communications and related traffic data would undermine the fundamental rights to privacy, data protection, freedom of expression, liberty and presumption of innocence.”
Similarly, members of the European Parliament Committee on Citizens’ Freedoms and Rights, Justice and Home Affairs have stressed that Member States should not have a general right to request whatever traffic and location data they wish without stating a specific reason why such information is needed. They have noted the risk that law enforcement authorities might use such authority to conduct broad and arbitrary ‹fishing expeditions›.
Further, European privacy commissioners have recognised that one of the best privacy safeguards is to minimize the collection of personal data where possible. They have consistently affirmed that confidentiality of communications is one of “the most important elements of the protection of the fundamental right to privacy and data protection as well as of secrecy of communications”, and that “any exception to this right and obligation should be limited to what is strictly necessary in a democratic society and clearly defined by law.” A blanket retention of all communications data for hypothetical and future criminal investigations would not respect these basic conditions.
Wide data retention powers for law enforcement authorities, especially if they were used on a routine basis and on a large part of the population, could have disastrous consequences for the most sensitive and confidential types of personal data. Vast databases now include personal data about medical conditions, racial or ethnic origins, religious or philosophical beliefs, political opinions, trade-union membership, and sexuality. New retention requirements as envisaged by the common position’s broad language will create new risks to personal privacy, political freedom, freedom of speech, and public safety. Moreover, because of the cross-border nature of Internet communications, your decision could have repercussions that will reach far beyond the European Union.
Some of you may consider that the Council’s position is not binding on EU Member States and that it should be up to the Member States’ Parliaments to decide, in their own national laws, whether data retention has or not to be allowed. However, a still unofficial Framework Decision, secretly drafted by some EU Member States, would compel all the States to introduce a law providing for the retention of telecommunications traffic data. This development clearly shows the Council and EU governments’ total disregard for the European Parliament’s opinion. That is why we now encourage you to decide whether the crucial issue of data retention should be a matter exclusively left in the hands of EU governments and the Council, out of reach of EU citizens’ representatives.
We therefore respectfully urge you to vote for the LIBE Committee’s position on Article 15(1), and not concede any compromise language. Whether the European Parliament will permit generalized surveillance of EU citizens has become a crucial issue for the future of democratic states. It is now up to you to safeguard fundamental freedoms…”
The letter failed in its objective.
Encryption and decryption both involve the use of a mathematical algorithm for transforming the data. In addition to the data to be transformed, the algorithm has one or more inputs that are control parameters:
* a key value that varies the transformation and, in some cases,
* an initialization value that establishes the starting state of the algorithm.
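A toy sketch of these two control parameters; the XOR ‹cipher› below is for illustration only and is in no way secure:

```python
import hashlib

def keystream(key: bytes, iv: bytes, length: int) -> bytes:
    # The key varies the transformation; the IV sets the starting state.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + iv + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def transform(data: bytes, key: bytes, iv: bytes) -> bytes:
    # XOR with the keystream; the same call both encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, iv, len(data))))

ciphertext = transform(b"attack at dawn", b"key", b"iv-1")
assert transform(ciphertext, b"key", b"iv-1") == b"attack at dawn"
assert transform(ciphertext, b"key", b"iv-2") != b"attack at dawn"
```

Changing either the key or the initialization value changes the whole transformation, which is exactly their role as control parameters.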
DeCSS is software able to break and decrypt the CSS copy protection system. It was written in 1999 by the Norwegian Jon Johansen when he was a 15 year-old student. His purpose was to help build a DVD player for Linux. Unfortunately, Hollywood does not approve of Linux as a DVD player.
On January 9, 2002, he was charged by the Norwegian Economic Crime Unit after pressure from the Motion Picture Association of America (MPAA).
2600 Magazine has also been successfully prevented from reproducing or linking to DeCSS.
You now need a very thorough understanding of local law and MPAA pressure before you can publish, link to, describe or use DeCSS. In his DeCSS university lecture of December 6, 2000, Gregory Kesden’s preamble included, “So today, you will literally get as much of an education as the law allows — and no more.”
Defamation Act, 1997 (UK)
“A major landmark in establishing company liability for information transferred via the Internet and other electronic means. Passed in the UK, comparable laws are under consideration for wider European legislation.
“In the context of email, the act sets out that the defendant needs to prove that he took reasonable care in relation to the publication of a statement which may be considered defamatory. In particular, the act places the burden of proof on to the defendant to show that he took ‹reasonable care›.
“Demonstrating ‹reasonable care› could therefore entail not only evolving a security policy, but also having the right content security solution to implement it. For example this could involve allowing legal disclaimers to be added to emails or facilitating key word searches on sensitive text such as ‹confidential› or ‹secret›.
“With the increasing transfer of liability for electronic circulation of information from the employee to the employer, many companies now need the capability to attach legal disclaimers to emails. In the light of recent litigation, the flexibility to adapt and customise email disclaimers is increasingly being implemented as part of demonstrating effective security policies.”
from Content Technologies’ Guide to Content Security, 2nd Ed
A user ID and password contained in a system when first delivered and installed, to enable initial access and configuration. By its nature, this default ID and password will require privileged access to the system. Failure to eliminate or remove default IDs and passwords is a common vulnerability in many installations.
If you accept delivery of any product containing a default ID and password, hardware or software, you should change the password as soon as possible, and remove or disable any associated account.
Defence Evaluation and Research Agency (DERA)
In the UK, DERA is the research arm of the MOD, now running as a semi-autonomous agency reporting direct to the Minister of Defence. It has a large number of sites in the UK (and some overseas) but information security work is largely concentrated at Malvern in Worcestershire. It is tasked by the MOD to conduct research into information security issues and undertakes work in both offensive and defensive techniques.
Until the mid-1980s it was the only government organisation with a significant information security research programme and its work on computer and network security predates that at GCHQ by at least 10 years. DERA at Malvern (then the Royal Radar Establishment and the Royal Signals and Radar Establishment) was an early participant in ARPANET and a leader of UK research and development in the defence packet switching field. In the 1980s it sought to design and develop secure computer systems for defence use but none of these achieved any significant success. It was more successful in designing packet switching encryption products and these eventually went into MOD service.
In the year 2000, the Labour Government announced that DERA would effectively be semi-privatised. DERA is one of Europe’s largest single research organisations. Among its 12,000 staff DERA employs many leading scientists and internationally acclaimed experts.
Denial of Service (DoS)
An attack that is specifically designed to prevent the normal functioning of a system, and thereby to prevent lawful access to that system and its data by its authorized users. DoS can be caused by the destruction or modification of data, by bringing down the system, or by overloading the system’s servers (flooding) to the extent that service to authorized users is delayed or prevented.
Denial of Service attacks normally stem from external sources using telecommunications (such as via the Internet); or from disaffected or disgruntled employees who bear a grudge towards the company.
Data Encryption Standard: a 64-bit block cipher with 16 rounds and a 56-bit key.
In 1972 the NBS (National Bureau of Standards, now renamed as the National Institute of Standards and Technology – NIST) started a process to find a standard encryption algorithm. The candidate selected was DES, originally developed by IBM. In 1976 DES was formally selected as the US government-standard encryption algorithm; and was subsequently adopted by other standards bodies including ANSI and ISO. It rapidly became the most widely used symmetric encryption algorithm in the world.
But DES has never been totally trusted by security academics and theorists. The main reason is that the NSA became involved in its final structure. NBS did not have the technical resource to evaluate the algorithm and asked the NSA to help. The NSA made several alterations, including reducing the key length from 128 bits to 56 bits and changing constants in a non-linear table-lookup operation called the S-box. (It later transpired that the S-box changes were designed to make the algorithm resistant to ‹differential cryptanalysis› attacks – known to the DES development team and the NSA in the early ’70s, but not otherwise discovered until the early ’90s by Eli Biham and Adi Shamir.)
It was the reduction of the key length from 128 bits to 56 bits that worried most cryptologists. Although the algorithm is sound, it has always been thought that this reduction was to allow the NSA to find keys through brute force – and this possibility became more of a likelihood when John Gilmore and the Electronic Frontier Foundation built a specialist machine capable of brute forcing DES in a few days for $250,000 in 1998. The time taken has subsequently been reduced to a few hours.
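The arithmetic behind this worry is easy to sketch; the search rate below is a rough, illustrative figure for a machine of the EFF class, not a published specification:

```python
keys_56 = 2 ** 56     # the keyspace after the reduction to 56 bits
keys_128 = 2 ** 128   # the keyspace originally proposed

rate = 9e10                    # keys per second (illustrative)
expected_tries = keys_56 / 2   # on average half the keyspace is searched
print(f"56-bit search:  about {expected_tries / rate / 86400:.1f} days")
print(f"128-bit search: about {keys_128 / 2 / rate / 3.15e7:.1e} years")
```

A 56-bit key falls to a well-funded search in days, while 128 bits remains utterly out of reach; this is why the reduction attracted so much suspicion.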
DES is clearly no longer suitable for securing sensitive data. NIST is in the process of selecting a replacement (AES) from a number of candidates. In the meantime, it is often recommended that existing DES users move to Triple-DES (3DES).
Differential cryptanalysis is a type of attack that can be used against iterative block ciphers. It is basically a chosen plaintext attack where the difference between values (or keys) is used to gain some information about the system.
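A sketch of the underlying idea: tabulate how often each input difference produces each output difference for a small S-box. The 4-bit S-box below is taken, purely for illustration, from the first row of DES’s S1; a markedly skewed entry is exactly the kind of statistical leak the attack exploits.

```python
# First row of DES S-box S1, read as a 4-bit substitution (illustrative).
SBOX = [14, 4, 13, 1, 2, 15, 11, 8, 3, 10, 6, 12, 5, 9, 0, 7]

def difference_table(sbox):
    # table[dx][dy] counts inputs x where S(x) XOR S(x XOR dx) == dy.
    table = [[0] * 16 for _ in range(16)]
    for x in range(16):
        for dx in range(16):
            table[dx][sbox[x] ^ sbox[x ^ dx]] += 1
    return table

t = difference_table(SBOX)
# An ideal cipher would make every (dx, dy) pair equally likely; the
# most biased pair gives the attacker information about the key.
count, dx, dy = max((t[dx][dy], dx, dy)
                    for dx in range(1, 16) for dy in range(16))
print(f"input difference {dx:x} gives output difference {dy:x} "
      f"for {count} of 16 inputs")
```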
The Diffie-Hellman key exchange algorithm was introduced by Diffie and Hellman in the same 1976 paper that introduced the concept of public key cryptography to the world. It depends upon the discrete logarithm problem for its security. It can use keys of varying sizes, and provides forward secrecy. It is purely a key exchange algorithm, and cannot be used to encrypt data. It is vulnerable to man-in-the-middle attacks.
Although patented, the patent ran out in 1997. Diffie-Hellman can now be used freely.
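The exchange itself can be sketched with textbook numbers; the 10-bit prime below is a toy value, where real use needs a prime of 2048 bits or more:

```python
import secrets

p = 941   # public prime modulus (toy size)
g = 2     # public generator

a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent

A = pow(g, a, p)   # Alice transmits A in the clear
B = pow(g, b, p)   # Bob transmits B in the clear

# Each side raises the other's public value to its own secret exponent.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob   # the shared key; never transmitted
```

Note that nothing here authenticates either party, which is precisely the gap a man-in-the-middle attack exploits.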
Digital Millennium Copyright Act (DMCA)
The Digital Millennium Copyright Act was enacted into US law in 1998. It is supposed to fulfill the requirements of the World Intellectual Property Organization (WIPO), but it has the unqualified support of the major companies in the software, film and music industries.
Its purpose is to provide better protection for the rights of authors and creators, and to counter the growing threat of piracy. While the aim cannot be criticized, the execution leaves much to be desired, and it is considered by most human rights and liberty groups to be badly written and oppressive.
Most criticism is levelled at the “Circumvention of copyright protection systems” section of the U.S. Code where it says:
“No person shall manufacture, import, offer to the public, provide, or otherwise traffic in any technology, product, service, device, component, or part thereof, that is primarily designed or produced for the purpose of circumventing a technological measure that effectively controls access to a work protected under this title;…”
This is so loosely worded that it can be applied to almost anything. And it has serious implications for information security. “The DMCA has enshrined the bug secrecy paradigm into law; in most cases it is illegal to publish vulnerabilities or automatic hacking tools against copy-protection schemes. Researchers are harassed, and pressured against distributing their work. Security vulnerabilities are kept secret. And the result is a plethora of insecure systems, their owners blustering behind the law hoping that no one finds out how bad they really are,” writes Bruce Schneier (CRYPTO-GRAM, November 15, 2001).
“The DMCA affects everyone because it gives corporate publishers absolute control over what we are allowed to do with the literature and music that they publish. The DMCA allows publishers to enclose books that you purchase online and download to your computer in simple software wrappers that can force you to agree to a contract before you can access the book and prevent you from loaning the book, giving it away, printing any of it, reselling it as a used book, or even moving it to a different computer. Even if the software wrapper is easily circumvented, it is illegal to do so. All of this has happened.
“The DMCA allows publishers to override the fair use doctrines that are laid out in the U.S. Constitution. Whether the DMCA will be struck down as being unconstitutional remains to be seen, but there is considerable resistance among House and Senate leaders to do anything about this law because of the large amounts of “soft money” donations coming from the music, movie, and publishing industry lobbyists.” — Aaron Logue
The case of Dmitry Sklyarov is worth considering.
Digital Rights Management (DRM)
Digital Rights Management (DRM) is a complex and emotive subject. It refers to the control of intellectual property, the enforcement of copyright in the digital world. There is no doubt that the intention is morally correct – creators have the right to be paid for the use of their creations; authors, musicians, actors, inventors all have the right to be paid for their work and to control its use.
The difficulty is in how to enforce copyright over an electronic medium such as the Internet where copying and distribution of electronic media is so simple, inexpensive and fast. In the physical world, copying and distributing a thousand copies of a contraband book is so difficult and expensive that it is not normally an economic proposition. On the Internet, copying and distributing a thousand copies of an electronic book would cost pennies and take minutes – and is an altogether different proposition.
The problem is how to enforce copyright in the digital world; how to manage digital rights.
Intense lobbying by publishing interests in the USA was instrumental in the development of the DMCA (Digital Millennium Copyright Act). The intention is good; the execution is often considered to be oppressive. But it is the aggressive promotion of this Act by organizations such as the Recording Industry Association of America (RIAA) that worries many civil liberties organizations.
There are many software applications and organizations that seek to provide solutions to the digital rights management conundrum. But much interest, and concern, is currently directed at the work of the Trusted Computing Platform Alliance (TCPA), and in particular towards Microsoft’s Palladium. While the companies concerned describe their work as a means of improving and enforcing computer security, most people consider it to be a means of imposing digital rights management at too high a cost to civil liberties.
Digital signatures use public key cryptosystems. A digital signature is generated by taking a mathematical summary of the document concerned (a hash value, or message digest). Provided that the hash algorithm cannot produce identical values from different messages (the one-way principle), then a hash value can only relate to one specific message. Thus, if the message is altered in any way, it will no longer produce the same hash value from the same hash algorithm.
Bob wants to send a legally binding message to Alice. Bob first uses a hash algorithm on the message to generate a hash value (or message digest or checksum). Bob then encrypts the hash value with his private key, appends it to the document, and sends it to Alice.
Alice decrypts the hash value using Bob’s public key. She then generates a new hash value using Bob’s message and the same hash algorithm. Alice compares the new hash value with the hash value sent with the message. If the two hash results are identical, then the message received is exactly the same as the message sent.
The content is thus verified. But the sender is also verified. Only Bob could have produced the encrypted version of the hash value (using his private key) that is decrypted with Bob’s public key.
The identical hash values prove that Alice has received exactly the message sent by Bob; and the use of Bob’s public and private keys proves that only Bob could have written the message. Bob has digitally signed the message.
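The flow can be sketched with textbook RSA values; the key below is a tiny, illustrative one, where real signatures use keys of 2048 bits or more plus a padding scheme:

```python
import hashlib

# Bob's toy RSA key pair: n and e are public, d is private.
p, q = 61, 53
n = p * q            # 3233
e = 17
d = 2753             # satisfies (e * d) % ((p - 1) * (q - 1)) == 1

message = b"Pay Alice 100 pounds."

# Bob hashes the message and encrypts the hash with his private key.
h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
signature = pow(h, d, n)

# Alice decrypts the signature with Bob's public key and compares it
# with a hash she computes herself from the received message. A tampered
# message would hash to a different value and fail the comparison.
h_check = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
assert pow(signature, e, n) == h_check   # content and sender verified
```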
An additional strength is that this process provides a degree of non-repudiation: since only Bob could have done the encryption, Bob cannot later deny having written the message. The main weakness, however, is that although Alice knows that Bob sent the message, she does not necessarily know that this is the Bob she thinks it is. The element of ‹trust› is still missing: but trust can be provided by the addition of digital certificates.
Many countries are now enacting laws to make such digital signatures legally binding, although the enforceability of these laws has not yet been tested in the courts. One likely outcome is that liability will be transferred from the banks/merchants to the buyer. At the moment, if your credit card is stolen, you are not generally liable for the cost of any fraudulent misuse. However, regardless of who conducts the transaction, if it is done with a digital signature then the owner of that digital signature will be legally liable for the outcome.
Defense Information Systems Agency. (http://www.disa.mil/)
The Defense Information Systems Agency (DISA) is a combat support agency responsible for planning, developing, fielding, operating, and supporting command, control, communications, and information systems that serve the needs of the President, Vice President, the Secretary of Defense, the Joint Chiefs of Staff, the Combatant Commanders, and the other Department of Defense (DOD) Components under all conditions of peace and war.
Disaster Recovery Plan
The procedures to be followed should a disaster (fire, flood, etc.) occur. Disaster recovery plans may cover the computer center and other aspects of normal organizational functioning. See also: Contingency Plan.
Discretionary Access Control (DAC)
A mechanism for the enforcement of user-defined file sharing. The TCSEC C1 classification and above requires that the owner or manager of each data file should be able to specify which users may access his or her data, and in what modes (e.g., read, modify, append).
Distributed Denial of Service (DDoS)
A Distributed Denial of Service (DDoS) attack uses multiple computers to launch a simultaneous coordinated DoS attack against one or more targets. The process involves compromising multiple servers and installing a single master program on one system and as many agent programs on other computers as possible. The master program is then used to instruct the agent programs to launch simultaneous DoS attacks against the target or targets. The effect is not simply a massive DoS attack; it is also an anonymous attack, since the agents’ host computers are not aware they are being used and the initial instigator cannot (easily) be traced.
There is no simple way to protect yourself against DDoS attacks since you are reliant upon other computers not being compromised. The best you can do is make sure that your own systems are fully patched and that you are not vulnerable to being used as a launchpad yourself. If everybody did this, then DDoS attacks would be less of a problem than they are. However, you need to do this for reasons beyond altruism: there is an increasing likelihood that you could be held liable under tort for any damage that you even unwittingly and unknowingly cause to a third party. In other words, if you do not take adequate action to prevent your system being compromised and used in a DDoS attack, you could be held liable for damages incurred by the target.
A variation on DDoS emerged during 2001 in which the target appears to be the Internet itself rather than a specific domain. This approach tends to use viral dissemination techniques to compromise multiple servers. The ‹virus› then lies dormant until a pre-determined point — at which time all of the compromised machines launch their attacks simultaneously. If enough hosts have been compromised, the entire Internet can be affected. Code Red was perhaps the first serious example of this type of DDoS.
“These are files containing groups of often-used computer code which can be shared amongst many programs. This has several advantages: programmers who use library code do not need to keep reinventing the wheel; programs which invoke library code do not each need to include a copy of that code, making their files smaller; updates to library code can be applied in one place, rather than in many programs. In Windows systems, the commonest form of library is the Dynamic Link Library (DLL).
“There are certain disadvantages to DLLs, too. A bug in library code produces a bug in every program which uses that library. A single missing library file might prevent dozens of other programs from executing. A virus which infects a DLL automatically ‹infects› any program which uses that DLL.
“One virus which does this is W32/Ska-Happy99, which infects the file WSOCK32.DLL, allowing the virus to intercept all future attempts to send email (independently of the email software you are using). The virus then emails a copy of itself along with every email you send.”
from Sophos' V-Files
DMZ (or Perimeter Network)
Sometimes called a DMZ (de-militarized zone), a perimeter network is an additional network between the protected network and the unprotected network, providing an additional layer of security. Servers that are necessarily exposed to the Internet (such as web servers and mail servers) are best placed in the DMZ and protected by a firewall or firewalls. Further firewalls separate the DMZ from the trusted network, or corporate LAN.
The term ‹DMZ› comes from the zone separating North and South Korea.
Loosely, a document file; specifically a Microsoft Word document file.
“Microsoft Word document files usually contain just document data. However, they may also contain programs (called “macros”) written in a high-level programming language that is part of Word itself. There are various versions of this macro language, including Word Basic in Word 95, Visual Basic for Applications 5 (VBA5) in Word 97, and Visual Basic for Applications 6 in Word 2000. All of these languages are designed to be easy to learn, and all are powerful enough to be used to write viruses.
“This means that when someone sends you a DOC file by email, it may be more than just a data file. It may also contain an embedded macro program which will automatically be fired up when you double-click on the DOC attachment in the email. DOC files are commonly exchanged, and this has allowed Word viruses to become extremely common. Seven out of ten viruses in the March 1999 Sophos Top Ten were able to infect DOC files.
“Anti-virus software can stop you accidentally opening infected DOC attachments sent by email, and you should certainly use it for protection. However, it is also worth trying to avoid sending DOC files unless it is strictly necessary. Sometimes, people send out DOCs containing short messages that could more efficiently have been sent as simple text emails (see ASCII). At other times, they send out more complex documents, but still do not need all the formatting bells and whistles offered by DOC files. In these cases, it would have been safer and more efficient to use Rich Text Format (see RTF).
from Sophos' V-Files
It is worth noting that apparent RTF files can also be dangerous. A malicious user could create a macro within a DOC file and save it with an RTF extension. This would appear to the average user to be a safe RTF file while it is really an infected DOC file. Word, however, is not fooled by the extension – it recognises it as a DOC, opens it as a DOC, and gets infected by the virus.
DOCs can also ‹leak› sensitive data. Although text may have been deleted by the author, remnants of the deleted text can stay within the file and be recovered later. Content security products can usually solve this problem.
It is also the highest subdivision of the Internet; usually by country or type of organization (edu: educational; com: commercial; org: organization; mil: military; and so on). The domain name (e.g., itsecurity.com) includes the next subdivision so that it makes a complete and unique identifier.
The identifier used on the Internet. It is a series of case-insensitive labels separated by dots, and is used to identify:
top-level domains: .com, .net, .mil, .org etcetera
host names: itsecurity.com
In common use, the domain name in the above example would be ‹itsecurity.com›.
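As an illustration (a minimal sketch, not tied to any particular DNS library), the case-insensitive, dot-separated labels can be pulled apart with a few lines of Python:

```python
# Split a domain name into its dot-separated labels.
# Labels are case-insensitive, so normalise to lower case;
# a trailing dot (the DNS root) is stripped first.
def labels(domain: str) -> list[str]:
    return [label.lower() for label in domain.rstrip(".").split(".")]

print(labels("ITsecurity.COM"))        # ['itsecurity', 'com']
print(labels("www.example.org."))      # ['www', 'example', 'org']
```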
Originally, a dongle was an EPROM device that connected to a PC’s parallel port and contained a unique authentication code. Its associated software application would periodically check the dongle for existence of the correct code and either wouldn’t start or would stop if the code was not returned. It was thus an early copy protection device – although it didn’t prevent the user copying the software, it was effectively impossible to copy the dongle.
By extension, ‹dongle› has come to mean any device (including software only) that serves the same purpose. The original devices were never very popular because they originally tied up the parallel port (later ones could be daisy chained and still provide normal parallel port functionality); but more importantly because copy protection went out of fashion.
It will be interesting to see if they become more successful now that copy protection is back in vogue.
Drive-by hacking is a term used to describe hacking or cracking the target’s wireless LAN (usually 802.11b) while outside of the target’s offices (perhaps in a vehicle — hence ‹drive-by› — or behind some shrubs in the grounds).
Drive-by hacking is possible because of weaknesses in 802.11b security (see WiFi, WEP). There have been numerous surveys demonstrating a high proportion of vulnerable WLANs found while simply touring city business centers in a taxi.
In August 2002, the Sunday Mirror (a UK tabloid newspaper) claimed “TOP-secret files can be downloaded from the Prime Minister’s computers in Downing Street… Sunday Mirror investigators were shown how to access the system used by MPs at their new £234million offices at Portcullis House, while parked at traffic lights 50 yards away.” This would have been a prime example of drive-by hacking — were it not for the slight detail that the security consultant doing the ‹showing› later “described the article as ‹riddled with inaccuracies› and stated categorically that he had not attempted, or claimed to be able, to hack into Whitehall computers.”
(Digital Signature Algorithm/Digital Signature Standard)
The Digital Signature Algorithm is a public key system for generating digital signatures. The key length can be between 512 and 1024 bits, although 1024 bits is recommended. It is based on the discrete logarithm problem, and is effectively for authentication only.
Strictly speaking, the Digital Signature Standard is a NIST standard (FIPS 186) specifying the use of the DSA algorithm for digital signatures. It was published in 1994.
The two terms are, however, frequently used interchangeably.
DSA has been criticised for lacking key exchange capabilities, for lacking an underlying encryption cryptosystem, for having been selected secretively by NIST, and for possible NSA influence on its design.
Department of Trade and Industry (UK).
The DTI has a difficult role as far as cryptography is concerned. It has historically and understandably been responsible for cryptographic export controls, and represents the UK in activities such as the Wassenaar Arrangement where international cryptography controls are agreed.
But any sort of export control is inevitably bad for trade. The DTI has therefore had the unenviable task of rationalising law enforcement’s desire for strict cryptographic control with its own desire to maximise the trading opportunities for UK industry.
This anomaly came to a head during 1999. The DTI’s e-commerce bill was designed to be an e-commerce enabler and to help fulfil the Labour Government’s promise to make the UK the world center for e-commerce. But under pressure from the Home Office, this bill included attempts to make it easy for law enforcement to gain access to the plaintext of encrypted messages.
At first this included the concept of compulsory key escrow using trusted third parties (trusted, that is, by the Government rather than necessarily by industry). Government could then obtain decryption keys from the TTPs. The problem, of course, is that there is a direct and irreconcilable contradiction between this sort of restriction and maximising trade.
The DTI dithered. In the meantime, the Home Office, whose brief is local law and order rather than international trade, had no such qualms or problems. The Home Office hijacked the cryptographic elements of the e-commerce bill and included them in its own Regulation of Investigatory Powers bill (RIP).
Under Home Secretary Jack Straw the RIP bill was forced through parliament with indecent haste, and became the Regulation of Investigatory Powers Act (RIPA) by the summer of 2000.
Dual control is a security procedure that requires two people (or possibly two processes or two devices) to cooperate in order to gain access to a system resource (data, files, devices). In its simplest form it could be a door that requires two separate keys, where each key is held by a different person. See separation of duties.
A computer system with two network interfaces, each with its own IP address, used to provide a secure point of control between two differing IP networks. It is a gateway because it acts as a conduit for some or all traffic between the two networks; examples thus include proxy servers and firewalls. The premise is that connections coming in on one interface are checked and verified before being allowed to pass out of the other interface. Typically one network is more trusted (e.g. an internal LAN) and the other is less trusted (e.g. the Internet).
Other systems may be dual homed; for example, some web servers have a network interface that presents itself to the Internet (through a firewall) to accept HTTP requests and a second interface that connects to a network for communication with a database server.
A silent ‹panic alarm› built into some access control systems. It would normally take the form of a secret code that could be added either before or at the end of an ID or password, and would cause a silent alarm to be sent to the system administrator or other official. It would indicate that the user is under some form of duress, and is perhaps being forced to log on or gain entry against his or her free will.
Dynamic (Stateful) Packet Filter
A dynamic packet filter firewall is a fourth-generation firewall technology. The terminology is not yet settled, nor is there a clear definition. Basically, a dynamic packet filter can monitor the state of active connections and use the information obtained to determine which network packets to allow through the firewall. By recording session information such as IP address and port numbers, a dynamic packet filter combines the best of application level gateways with simple packet filters: secure and fast.
This is particularly relevant to UDP packets.
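The session-tracking idea can be sketched in a few lines of Python. This is a toy model, assuming packets are reduced to address/port tuples; a real dynamic packet filter operates on raw packets inside the firewall itself:

```python
import time

class DynamicPacketFilter:
    """Toy stateful filter: allow an inbound UDP packet only as a
    reply to a recently seen outbound packet."""

    def __init__(self, timeout: float = 30.0):
        self.sessions = {}      # (src, sport, dst, dport) -> last seen
        self.timeout = timeout  # seconds before session state expires

    def outbound(self, src, sport, dst, dport):
        # Record session state so the reply can be matched later.
        self.sessions[(src, sport, dst, dport)] = time.monotonic()

    def inbound_allowed(self, src, sport, dst, dport) -> bool:
        # Inbound is allowed only if it reverses a live outbound session.
        last_seen = self.sessions.get((dst, dport, src, sport))
        return (last_seen is not None
                and time.monotonic() - last_seen < self.timeout)

fw = DynamicPacketFilter()
fw.outbound("10.0.0.5", 5353, "8.8.8.8", 53)                 # DNS query out
print(fw.inbound_allowed("8.8.8.8", 53, "10.0.0.5", 5353))   # True: a reply
print(fw.inbound_allowed("6.6.6.6", 53, "10.0.0.5", 5353))   # False: unsolicited
```

The expiry timeout matters: without it, the state table grows without bound and stale entries leave holes open.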
In issue 24 of Phrack, in February 1989, the editors published an article on the computer systems which handled the 911 emergency telephone system in the United States. The document, known as E-911, had been obtained by a hacker who gained access to a computer system owned by a US telephone company.
Five months after publication, the editors of Phrack were raided by the US Secret Service and, at the subsequent legal case, the prosecution argued that the highly confidential document was worth $80,000 and that its publication had done untold damage to the telephone company. Phrack’s defence lawyers, however, proved that the document was available publicly from the telephone company for around $13.
The case was then dropped.
Current and previous issues of Phrack can be obtained via the web.
Electronic mail. A message sent or retrieved electronically. The term is also used as a verb: “to e-mail someone” is to send that person a message by electronic means. The software used to send and receive e-mail is called e-mail client software.
Messages generally comprise a header (containing address information for both the sender and the addressee, a subject line etc), and the body (containing the content of the message). E-mail conforming to the MIME specification can also include ‹attachments› comprising binary files that are sent with the message.
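The header/body/attachment structure can be seen by building a message with Python's standard library (the addresses and filename here are purely illustrative):

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"      # sender (illustrative address)
msg["To"] = "bob@example.com"          # addressee
msg["Subject"] = "Quarterly report"    # subject line
msg.set_content("Please find the report attached.")   # the body

# A binary attachment turns the message into a MIME multipart.
msg.add_attachment(b"PK\x03\x04", maintype="application",
                   subtype="octet-stream", filename="report.zip")

print(msg.get_content_type())   # multipart/mixed
```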
Originally, the body of a message was simple ASCII, and could therefore pose little threat to security. More recently, however, e-mail clients have begun to use HTML. The advantage of HTML is that it can improve the visual appearance of the message. The disadvantage is that it can include and hide security threats. The biggest threat, however, resides in any binary attachment that could be infected with a virus. Attached MS Word documents are the most common form of spreading macro viruses.
There are, however, other threats that can be contained within the text of even an ASCII message. E-mail has been used to pass ‹jokes› around the company. Some can be in bad taste, offensive or simply illegal. Messages can contain racist and sexist comments – and can leak confidential information that should be kept private. Some of these ‹threats› have legal implications, others have commercial implications. Either way, a company should have a strict e-mail policy in force, detailing exactly how staff should use the system, and what can and cannot be included. This policy should then be enforced by content security software.
Messages containing confidential data should always be encrypted to maintain confidentiality. They should include a digital signature to authenticate the sender, prove that the message has not been tampered with (that is, demonstrate its integrity), and prevent the sender from denying having sent it (provide for non-repudiation).
EAR (Export Administration Regulations)
Ever since the rules for the export of cryptographic products from the USA were transferred from the State Department (ITAR – International Traffic in Arms Regulation) to the Department of Commerce (EAR – Export Administration Regulations), there has been a gradual relaxation of the rules. This is not to say that it is easy to export strong cryptography – only that it is possible subject to certain continuing restrictions.
It must be stressed that these rules are continually changing. However, Bert-Jaap Koops› seminal Crypto Law Survey (Version 18.4, January 2001) described the current status as:
“After the President’s Export Council Subcommittee on Encryption advised in “Liberalization 2000” to ease the export controls, the government announced further relaxation of export controls on 16 September 1999. (At the same time as this announcement, the government announced the Cyberspace Electronic Security Act (CESA) 1999 to meet the perceived effects of the export changes on law enforcement and national security.) The changes were to be implemented by 15 December 1999, but were subsequently postponed. The new regulations were finally published on 12 January 2000 (the press release is less specific but much more readable). The major components of the updated policy are the following.
* Any crypto of any key length can be exported under a license exception, after a technical review, to non-government end users in any country except the seven “terrorist countries”. Exports to governments can be approved under a license.
* Retail crypto (i.e., crypto which does not require substantial support and is sold in tangible form through retail outlets, or which has been specifically designed for individual consumer use) of any key length can, after a technical review, be exported to any recipient in non-terrorist countries.
* Unrestricted crypto source code (like most “open source” software) and publicly available commercial source code (like “community source” code) can be exported to any end-user under a license exception without a technical review. BXA must be given a copy or the URL of the source code. All other source code can be exported under license exception after a technical review to any non-government end-user. One may not, however, knowingly export source code to a terrorist country, although source code may be posted on the WWW for downloading without the poster having to check whether it is downloaded from a terrorist country.
* Any crypto can be (re)exported to foreign subsidiaries of US firms without a technical review. Foreign nationals working in the US no longer require an export license to work for US firms on encryption.
* The regulations implement the December 1998 Wassenaar changes (notably, export of 56-bits and 64-bits (for mass-market products) crypto to non-terrorist countries).
* Post-export reporting is required for exporting certain products above 64 bits to non-US entities.
Since 19 October 2000, a further liberalization of export controls is effective, triggered by changes in the EU export regulations (see the Federal Register Vol. 65, No. 203, pp. 62600-10, available at BXA). The liberalization was announced on 17 July 2000. A license exception is introduced for export of any crypto product to any end user (so, the distinction between government and non-government end users is dropped) in the 15 EU countries, Australia, Czech Republic, Hungary, Japan, New Zealand, Norway, Poland, and Switzerland. Also, US exporters can ship products immediately after filing a commodity classification request, without waiting for the technical-review results or the previously used 30-day delay period.”
Echelon is the code name for a global surveillance system operated primarily by the NSA but including involvement from the secret services of the UK, Canada, Australia and New Zealand. It has to be said that the various governments concerned invariably deny or play down its existence.
Nevertheless, there is substantial circumstantial evidence to confirm its existence. It is supposedly based on the interception of satellite telecommunications. Vast computers then scan all interceptions looking for key words and phrases – such as, we may surmise, ‹kill the President›, ‹bomb›, ‹nerve gas› and so on.
70. Project P-415/ECHELON made heavy use of NSA and GCHQ’s global Internet-like communication network to enable remote intelligence customers to task computers at each collection site, and receive the results automatically. The key components of the system are local “Dictionary” computers, which store an extensive database on specified targets, including names, topics of interest, addresses, telephone numbers and other selection criteria. Incoming messages are compared to these criteria; if a match is found, the raw intelligence is forwarded automatically. Dictionary computers are tasked with many thousands of different collection requirements, described as “numbers” (four digit codes).
STOA’s Interception Capabilities 2000, prepared by Duncan Campbell
Perhaps the most extensive discussion of Echelon appeared in the book Secret Power by Nicky Hager – a book that is now difficult to obtain. More information can be found in the Interception Capabilities 2000 report by Duncan Campbell. Further information can be found at:
A CBS 60 Minutes programme shown on Sunday evening, 27 February 2001, contained:
KROFT: (Voiceover) We can’t see them, but the air around us is filled with invisible electronic signals, everything from cell phone conversations to fax transmissions to ATM transfers. What most people don’t realize is that virtually every signal radiated across the electromagnetic spectrum is being collected and analyzed. How much of the world is covered by them?
Mr. MIKE FROST (Former Spy): The entire world, the whole planet – covers everything. Echelon covers everything that’s radiated worldwide at any given instant.
KROFT: Every square inch is covered.
Mr. FROST: Every square inch is covered.
“The EICAR Standard Anti-Virus Test File
This is a simple text file (see ASCII) that can also act as a program. It consists of one line of printable characters; if saved into a file called EICAR.COM, it can actually be executed. It prints the message:
Most anti-virus products detect this file as if it were a virus. This provides a safe and simple way of testing the installation and behaviour of your anti-virus software without needing to use a real virus. Using a real virus for testing on your corporate network is rather like setting fire to your wastepaper basket to test the smoke alarm — an unnecessary risk.
To make your own EICAR test file, create a text file called EICAR.COM containing a single line that looks like this:
Note that the ‹O› in the third character position is the letter ‹oh›, not the digit ‹zero›. If you have typed (or pasted) the text correctly, Sophos Anti-Virus will tell you the file contains ‹EICAR-AV-Test›.”
from Sophos› V-Files
Electronic commerce is business conducted by the exchange of information via electronic media rather than paper media. It thus includes the use of electronic data interchange, bulletin boards, electronic mail, facsimile transmissions and other paperless means of communication. In general, it tends to mean ‹business conducted across the Internet›.
The term is often shortened to ‹e-commerce›. Mobile commerce (m-commerce) is a subset of e-commerce, referring to business conducted via mobile electronic devices such as mobile telephones.
Electronic commerce has been slower in adoption than expected. Few analysts doubt, however, that it will revolutionize business over the next few years. The projections for the amount of business that will be conducted electronically are continually being revised upwards.
Concerns over security (theft of credit card numbers, large scale fraud etcetera) are the main causes for the delay. Governments have been passing laws to facilitate its growth and standardize its use (particularly in areas such as ‹digital signatures›); and software vendors have developed a raft of systems to make it easier to implement and more secure to operate. PKI software is considered to be the foundation for secure electronic commerce.
Elliptic Curve Cryptography (ECC)
Elliptic Curve Cryptography is a relatively new branch of asymmetric cryptography based on the mathematics of elliptic curves over finite fields, used for key exchange and digital signatures.
In theory, an efficient implementation of ECC can produce smaller keys in a shorter time than Diffie-Hellman keys offering the same security. ECC keys are also considered to be more resistant to brute force attacks than any other form of asymmetric keys.
However, ECC is new; and the strengths and weaknesses are yet to be fully explored and understood.
An email client is the software you use to create emails on your own PC, and to display emails that you receive. The term is usually used to differentiate this type of email from webmail, which uses the services of a webserver such as Yahoo or MSN. The most common email clients are Microsoft Outlook Express and Eudora.
In most cases, the email client itself is fairly secure – it is the way that it is used that can introduce problems. These problems include
- HTML mail
It is safest not to allow HTML mail. It looks prettier than simple text, but can introduce problems. For example, a picture is not embedded in the actual HTML file – all that is embedded is the link to where the graphic resides. The browser recognises the link and goes to the source site to grab the graphic – the problem is that it might grab more than you expect – or want…
- Attachments
Attachments to email are the most common method of distributing files across the Internet, whether it’s the distribution of corporate documents, a letter to granny, or a holiday snap. But attachments are also the most common method of spreading worms and viruses across the Internet – whether macro viruses within a document or spreadsheet, or the virus/worm itself simply disguised as something else.
- The Address Book
Many viruses raid the email client’s address book. They take two addresses and forge an email: one address becomes the forged ‹sender› while the other is the recipient. The virus then mails itself to the recipient and repeats the process with different combinations from the address book. Since both addresses come from the same address book, it is likely that the people concerned may know each other. The recipient recognises the sender and trusts the email. Opening the attachment spreads the infection.
Email hoaxes are more prevalent than you might think. They differ from viruses in that they are not self-replicating, do not attach to another file, and do not carry a payload. Instead, they try to spread themselves by social engineering; that is, by persuading you to do something unnecessary and/or irrelevant, and especially to forward the mail to all your colleagues. A typical hoax might seek to persuade you that there is a new ‹killer virus› for which there is no cure – it is set to destroy your motherboard at 12:00 pm tomorrow – make sure you turn all your computers off at that time – and warn all your friends to do the same…
Encoding involves changing data from eight-bit to seven-bit to enable it to travel over the SMTP channel, which was historically a seven-bit only channel. This state of affairs arose because initial designs only concerned themselves with text transfers using restricted ANSI character sets, which failed to consider non-English language characters or file-based attachments.
Therefore, non-seven-bit data has to be encoded into a seven-bit data stream. UUE, which has many variants, and BINHEX are fairly old techniques, and MIME is the current favourite encoding format. If you are looking at content solutions, you should ensure that multi-part UUE and XXE variants are supported.
Encoding is found in binary streams, generally to denote how the data should be treated at the remote end. In HTTP for example, headers will declare the data type of the following transmission. This will tell the browser how to render the data, such as plain text or HTML.
from Content Technologies' Guide to Content Security, 2nd Ed
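The eight-bit-to-seven-bit round trip can be demonstrated with MIME's base64 encoding, using Python's standard library (UUE and BINHEX work on the same principle):

```python
import base64

data = bytes(range(256))              # arbitrary 8-bit binary data
encoded = base64.b64encode(data)      # 7-bit-safe ASCII representation

# Every encoded byte fits in the historical 7-bit SMTP channel...
assert all(b < 128 for b in encoded)
# ...and the original 8-bit data is recovered exactly at the far end.
assert base64.b64decode(encoded) == data
```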
Encryption is the scrambling of data so that it becomes very difficult to unscramble and interpret. Scrambled data is called ciphertext. Unscrambled data is called plaintext. Unscrambling ciphertext back to the original plaintext is called decryption.
Data encryption is inevitably performed by the use of a cryptographic algorithm and a key. The algorithm uses the key to scramble and unscramble the data. Ideally, the algorithm should be made public (so that it can be scrutinized and analyzed by the cryptographic community), while the key remains secret.
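As a toy illustration of the algorithm-plus-key idea (a repeating-key XOR, far too weak for real use), the same routine scrambles plaintext into ciphertext and, given the correct key, unscrambles it again:

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte with the repeating key; XOR is its own inverse,
    # so the same function both encrypts and decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plaintext = b"attack at dawn"
ciphertext = xor_cipher(plaintext, b"secret")          # scramble
assert xor_cipher(ciphertext, b"secret") == plaintext  # unscramble
assert xor_cipher(ciphertext, b"wrong!") != plaintext  # wrong key fails
```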
The key is fundamental to the strength of the encryption. You need the one correct key before you can decrypt the ciphertext. It follows, then, that the longer the key, the greater the range of possible values it could have. The range of possible values is called the keyspace. The greater the keyspace, the more difficult it is for an unauthorized person to discover the correct key.
Encryption cannot make unauthorized decryption impossible; it can merely make it improbable. With unlimited processing capacity and unlimited time available, all cryptosystems could be broken. The purpose of encryption is to make it as unlikely as possible that a ciphertext could be broken within the period of time during which the contents should remain secret.
There is an arbitrary and subjective distinction between weak and strong encryption. Strong encryption implies that it would effectively be impossible to find the key within the effective lifetime of the secret. Weak encryption implies that the key could be found with a realistic amount of processing capacity and a reasonable amount of time. The difference between weak encryption and strong encryption is thus effectively a question of processing power. As computers become more powerful and less expensive, what is now ‹strong› encryption will inevitably become ‹weak› encryption. Until recently, any key length above 56 bits was generally considered to be ‹strong› encryption. (Note, however, that 56-bit DES was cracked during 1998.) 56-bit encryption could now be considered weak (see also, snake oil).
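The weak/strong distinction is, at bottom, keyspace arithmetic; a sketch assuming a purely notional search rate of a billion keys per second:

```python
def keyspace(bits: int) -> int:
    # Each additional key bit doubles the number of possible keys.
    return 2 ** bits

RATE = 1e9                     # keys tried per second (assumed rate)
SECONDS_PER_YEAR = 3600 * 24 * 365

years_56 = keyspace(56) / RATE / SECONDS_PER_YEAR    # roughly 2.3 years
years_128 = keyspace(128) / RATE / SECONDS_PER_YEAR  # around 10**22 years

assert keyspace(57) == 2 * keyspace(56)
assert 2 < years_56 < 3
```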
Many governments around the world view strong encryption with concern. The argument is that the availability of strong encryption limits their ability to monitor the messages of suspected terrorists, drug traffickers and paedophiles. There is consequently a growing movement for governments to demand a system of key escrow or key retrieval. This would ban the use of strong encryption unless either a back door is made available to Law Enforcement, or a decryption key is lodged with a Trusted Third Party (TTP). This, needless to say, has led to considerable debate between government and government agencies on the one side, and academe, civil liberty groups and the greater part of industry on the other. (See RIPA and Carnivore)
End-to-end implies ‹from source to destination›. Thus, end-to-end encryption involves encryption at the source that is not decrypted, regardless of any other systems or networks the transmission passes through, until it reaches its destination.
The Enigma machine was an encryption/decryption device designed in Germany before, and used by the Axis forces during, the Second World War. The Germans considered it to be unbreakable.
The device looked a bit like an old-fashioned typewriter, but with lamps and rotors that were swapped around each day. It is generally considered that breaking the Enigma code at Bletchley Park in the UK had a major effect on shortening the duration of the war. However, there is also much evidence to suggest that Polish cryptographers had effectively broken the code even before the outbreak of hostilities.
The algorithm itself was considered to be so strong that it was included within Unix as late as the 1970s. However, an early version of The Site Security Handbook (now RFC 2196), written in the 1990s, noted:
Similar to the DES command, the UNIX “crypt” command allows a user to encrypt data. Unfortunately, the algorithm used by “crypt” is very insecure (based on the World War II “Enigma” device), and files encrypted with this command can be decrypted easily in a matter of a few hours. Generally, use of the “crypt” command should be avoided for any but the most trivial encryption tasks.
An entity is an active element of a system. It could be a person, a group of persons, a subsystem, or a process.
The deliberate planting of apparent security weaknesses with the specific purpose of detecting those who are likely to exploit any genuine weaknesses.
In thermodynamics, entropy is the degree of unavailability of a system’s thermal energy, often more simply interpreted as the degree of disorder or randomness.
In information theory it is the extent of uncertainty about the true value of a random variable – in other words, simplistically – it is the degree of randomness in a variable. Mathematically, and especially in terms of measuring the entropy, it is not so simple. But if the likelihood of an event is a certainty, it has zero entropy. If you flip a coin, the result could be either heads or tails. In this event, the likelihood is one or the other, and can be expressed as having one bit of entropy. The higher the entropy value, the greater the degree of genuine randomness.
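The coin-flip example follows directly from Shannon's formula, H = -sum(p * log2 p); a minimal sketch in Python:

```python
import math

def entropy(probs) -> float:
    # Shannon entropy in bits: H = -sum(p * log2(p)) over the outcomes.
    return -sum(p * math.log2(p) for p in probs if p > 0)

assert entropy([1.0]) == 0.0        # a certainty has zero entropy
assert entropy([0.5, 0.5]) == 1.0   # a fair coin flip: one bit
# More equally likely outcomes mean more entropy (a fair die beats a coin).
assert entropy([0.25] * 4) > entropy([0.5, 0.5])
```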
Genuine random numbers are essential for cryptography – but the generation of truly random numbers is difficult (see random number generator).
Key escrow involves lodging the decryption key with a Trusted Third Party (TTP). It is an emotive subject because of governments' repeated attempts and known desire to enforce a general requirement not merely for key escrow, but also for the mandatory release of that key to Law Enforcement agencies. This raises civil liberties issues as well as security issues.
Nevertheless, key escrow is a concept that will need to be considered by many organizations. If strong encryption is used, and the relevant key is lost either through accident or misadventure, theft or employee disaffection, then the organization concerned will lose its data. Lodging the key with a TTP means that it can be recovered in extremis.
The security issues center around the security of the TTP. In general, any addition to the chain of trust is the addition of a weak link in the chain. Use of a TTP introduces new threats that would not otherwise exist: TTP staff could be duped, bribed or threatened; TTP systems could be hacked.
Very similar to a Penetration Tester. The ethical hacker is an individual who is employed and can be trusted to undertake an attempt to penetrate networks and/or computer systems using the same methods as a Hacker. Typically an Ethical Hacker will go further than a penetration tester by using such techniques as Social Engineering to obtain information from staff to facilitate a successful penetration.
A formal methodology and set of procedures used to measure the degree of trust that can be placed in a system.
EXE files contain programs that you can run directly. Usually, you do this by double-clicking on the program’s icon, by clicking on a shortcut on the desktop or start menu, or by entering the name of the program at a command prompt. Programs can also be executed from other programs, or by scripts such as batch files or Visual Basic Script files. Files with the extension COM are also directly executable under Windows.
Since programs are designed to be executed and to take control of the computer, it comes as no surprise to find that the vast majority of known viruses (over 80% at April 1999) infect program files. However, real-world infections (known as «in-the-wild» infections) by program-infecting viruses are much less common (less than 15% of reports at April 1999).
This is probably because people exchange programs far less frequently than they exchange document files (see DOC) and spreadsheets (see XLS), so that infected programs are relatively less likely to be passed from computer to computer.
However, experience suggests that people are easily persuaded to execute unknown programs if they are sent EXEs by email. You are advised to reject unsolicited programs (see Trojan), even if you seem to know the source, because it is rarely necessary to accept them.
from Sophos' V-Files
The methodology for enacting an attack against a particular vulnerability.
The act of taking advantage of a vulnerability; that is, exploiting the weakness.
Exploits are frequently published on the Internet; and not always by Black Hat crackers. The ethics are debatable, but many exploit publishers claim that this is the only way to force software vendors to develop more secure software, and produce fixes for existing software that has weaknesses.
If you learn of a vulnerability and develop an exploit for it, it is considered good practice to notify the software vendor and give the developer reasonable time to produce a fix before publishing the exploit.
See also: Full disclosure
Factoring is the process of obtaining the factors of a given number; that is, of finding the integers that are multiplied together to produce the resulting number. Thus, the factors of 35 are the two prime numbers 7 and 5.
At this scale, factoring is not difficult – but only because the numbers are so small. Multiplying two prime numbers is simple – but the reverse, factoring the product to find the original primes, is not. And, of course, the bigger the two original prime numbers, the more difficult it is to factor the product.
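The asymmetry can be caricatured with naive trial division. This Python sketch (function name invented for illustration) copes easily with 35, but is hopeless against the hundreds-of-digits products used in practice:

```python
def trial_factor(n):
    """Return a pair of factors of n by trial division.
    Fine for small n; the running time grows with the square root of n,
    which is astronomically slow for RSA-sized numbers."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    return n, 1  # n is prime

factors = trial_factor(35)  # recovers the primes 5 and 7
```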
This problem lies at the heart of several public key ciphers, including RSA. There is no known easy method to factor a large number. The possibility that one can be developed is considered to be remote. However, if one were to be found, it would then become possible to find the RSA private key, and decrypt private messages and forge digital signatures.
The design principle that requires that the failure of part of a system will not result in the failure of the rest of the system – particularly in terms of access to the rest of the system.
A method of system termination that attempts to leave the system processes and components in a secure condition in the event of a system failure.
False Acceptance Rate (FAR)
A term that is particularly relevant to biometric access control systems. The false acceptance rate (FAR) is a measure of the likelihood that the access system will wrongly accept an access attempt; that is, will allow the access attempt from an unauthorized user.
A false negative is the term applied to a failure in an alerting system – most commonly in an anti-virus product or intrusion detection system. It occurs when a virus or intrusion condition exists, but is ‹allowed› (or ignored or missed) by the alerting system.
Compare with false positive.
A false positive is a term applied to a failure in an alerting system – most commonly in an anti-virus product or intrusion detection system. It occurs when a virus or intrusion condition is incorrectly reported; that is, the alerting systems reports a virus or intrusion condition that does not exist. Too many false positives can be very intrusive.
Compare with False negative.
False Rejection Rate (FRR)
A term that is particularly relevant to biometric access control systems. The false rejection rate (FRR) is a measure of the likelihood that the access system will wrongly reject an access attempt; that is, will refuse the access attempt from an authorized user.
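Both FAR and FRR are simple ratios. The Python sketch below (hypothetical trial figures, invented function names) shows the arithmetic:

```python
def far(false_accepts, impostor_attempts):
    """False Acceptance Rate: fraction of unauthorized attempts wrongly accepted."""
    return false_accepts / impostor_attempts

def frr(false_rejects, genuine_attempts):
    """False Rejection Rate: fraction of authorized attempts wrongly refused."""
    return false_rejects / genuine_attempts

# Hypothetical trial: 3 of 1000 impostor attempts were accepted,
# and 20 of 1000 genuine attempts were rejected.
print(f"FAR = {far(3, 1000):.1%}, FRR = {frr(20, 1000):.1%}")
# FAR = 0.3%, FRR = 2.0%
```

Tuning a biometric system typically trades one rate against the other: a stricter matching threshold lowers the FAR but raises the FRR.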
File Allocation Table.
The ‹index› used by DOS and current Windows versions to keep track of files on a disk. DOS uses a 16-bit FAT, while later versions of Windows 95, together with Windows 98 and ME, use the more efficient 32-bit FAT (FAT32).
Corruption of the FAT, caused by physical damage to a disk, faulty software or a virus, can be fixed by someone with the necessary skills and software tools.
Federal Criteria for Information Technology Security (FC)
US draft security criteria for trusted systems. This was an attempt by the US to develop a trusted system evaluation criteria to replace the US TCSEC and become an element in a suite of Information Processing Standards under the management of NIST. While it is still possible to find the original draft documents published in December 1992, the project has effectively been shelved.
FC-FIPS would have been an improvement on TCSEC, but was overtaken by the international agreement to develop common criteria stemming from a meeting in Brussels, February 1993.
A ‹filter› is a porous device for removing impurities. This is an accurate description for the various types of filter found within the information security industry, especially:
* URL filters
* content filters
A URL filter is a device designed to prevent access to specified websites – normally pornographic, racist, hateful, seditious or otherwise dubious content. It has to be said, however, that the whole concept is fraught with problems. This is censorship – but who is the censor? Who decides, and on what grounds, that a particular website should be banned?
A content filter seeks to recognise pre-defined content within files, especially e-mails, and take steps based on that content. It could, for example, be set to recognise illegal content, and report it to a supervisor.
A service that provides information about a particular user or all users logged on to the system, or a remote system. This typically shows the full name, last login time, idle time, terminal line, and terminal location (where applicable). It may also display a plan file left by the user.
Its most common use is to see if a particular person has an account on another Internet site; that is, as a tool for locating people. However, it provides valuable information for intruders since it gives data such as valid usernames, which valid usernames are not currently in use, and whether accounts likely to notice their activity are currently logged on. It is therefore a good idea to place restrictions on the finger service.
Federal Information Processing Standards. FIPS Publications are issued by NIST as technical guidelines for US government procurement of computer systems and services.
Some of the more infosec relevant FIPS publications include:
* FIPS Pub 31: U.S. Department of Commerce, “Guidelines for Automatic Data Processing Physical Security and Risk Management”.
* FIPS Pub 46-2: “Data Encryption Standard (DES)”.
* FIPS Pub 81: “DES Modes of Operation”.
* FIPS Pub 102: “Guideline for Computer Security Certification and Accreditation”.
* FIPS Pub 113: “Computer Data Authentication”.
* FIPS Pub 140-1: “Security Requirements for Cryptographic Modules”.
* FIPS Pub 151-2: “Portable Operating System Interface (POSIX)–System Application Program Interface [C Language]”.
* FIPS Pub 180-1: “Secure Hash Standard”.
* FIPS Pub 185: “Escrowed Encryption Standard”.
* FIPS Pub 186: “Digital Signature Standard (DSS)”.
* FIPS Pub 188: “Standard Security Label for Information Transfer”.
A firewall is a combination of software and hardware components that controls the traffic that flows between a secure network (usually a company LAN) and an insecure network (usually, but not necessarily, the Internet), using rules defined by the system administrator. The firewall sits at the connection between the two networks. All traffic, from one network to the other, passes through the firewall. The firewall either allows, rejects or drops the traffic based on the security policy (rules programmed into the firewall).
Rejecting traffic sends a message back to the sender explaining that the message has been rejected. Dropping traffic simply ignores the packets concerned. The latter approach lengthens the time it would take for a network scan to complete since the sender is made to wait for the communication to time out.
The secure trusted network is said to be ‹inside› the firewall; the insecure untrusted network is said to be ‹outside› the firewall.
Firewalls are an essential part of Internet security. However, it is important to remember that they cannot provide complete security. They can do nothing to guard against insider threats, and they can do nothing to protect against threats that bypass the firewall (such as individual dialup connections to the Internet).
Furthermore, the firewall has to be configured to allow some traffic through – otherwise nobody on the inside could access the Internet nor send Internet e-mail. The fact that it allows some traffic through provides a channel that could potentially be exploited – and could certainly carry viruses.
So, while a firewall is essential, it is essential that it be only a part of an overall security policy.
One type of firewall involves routers that incorporate packet filtering. These are known as screening routers. Packets are intercepted and examined. Those packets not conforming to predefined rules are rejected, while the others are allowed to pass. Advantages of this approach are that it is efficient, widely available, and a single screening router can help to protect an entire network. Disadvantages are that packet filtering routers are complex to configure, difficult to test, and reduce router performance.
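The rule-matching a screening router performs can be caricatured in a few lines of Python. The rule fields, packet layout and default policy here are all invented for illustration; real filter languages are considerably richer:

```python
# Toy rule table: first matching rule wins; unmatched traffic is dropped.
RULES = [
    {"action": "allow",  "proto": "tcp", "dst_port": 25},  # inbound SMTP
    {"action": "allow",  "proto": "tcp", "dst_port": 80},  # inbound HTTP
    {"action": "reject", "proto": "tcp", "dst_port": 23},  # telnet: refuse loudly
]

def filter_packet(packet):
    """Return 'allow', 'reject' (tell the sender) or 'drop' (stay silent)."""
    for rule in RULES:
        if rule["proto"] == packet["proto"] and rule["dst_port"] == packet["dst_port"]:
            return rule["action"]
    return "drop"  # default-deny: anything unmatched is silently discarded
```

Note how the default-deny final line mirrors the 'drop' behaviour described above: the sender is simply left to time out.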
An ‹improvement› to this approach is the addition of ‹stateful inspection›. In this case the firewall ‹remembers› conversations between the two systems, and only needs to fully examine the first packet of a conversation. This improves performance.
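The connection table at the heart of stateful inspection can be sketched as follows (Python, with data structures invented for illustration): once a conversation's first packet passes the full rule check, later packets in the same conversation take a fast path.

```python
# Connections are identified here by a (src, dst, dst_port) tuple --
# a simplification of the real 5-tuple used by stateful firewalls.
seen_connections = set()

def stateful_filter(packet, full_check):
    """full_check is the expensive rule evaluation (e.g. the complete rule set).
    It is only consulted for the first packet of each conversation."""
    conn = (packet["src"], packet["dst"], packet["dst_port"])
    if conn in seen_connections:
        return "allow"              # fast path: known conversation
    if full_check(packet) == "allow":
        seen_connections.add(conn)  # remember the conversation
        return "allow"
    return "drop"
```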
Another approach is to provide proxy services – also known as application-level gateways. A proxy service requires two components: a proxy server and a proxy client (which in some configurations can be normal client programs). A client inside the firewall will talk to the proxy server which ‹pretends› to be the real (Internet) server outside of the firewall. In this way it can evaluate the traffic without requiring a direct connection between the secure and insecure networks.
Advantages of this approach include improved logging capabilities, ability to perform intelligent filtering, easier user authentication, capabilities for caching, and potential protection against malformed packets. Disadvantages include the need for new proxies for new services (which may not be immediately available); and the probable need to modify clients, applications or procedures.
History of the Firewall
The first generation of firewall architectures appeared around 1985 and came out of Cisco’s IOS software division. These are called packet filter firewalls. The first paper describing the screening process used by packet filter firewalls did not appear until 1988, when Jeff Mogul from Digital Equipment Corporation published his studies.
Around 1989-1990, Dave Presotto and Howard Trickey of AT&T Bell Labs pioneered the second generation of firewall architectures with research in circuit relays: circuit level firewalls. But they neither published any papers describing this architecture nor released a product based upon their work.
The third generation of firewall architectures – application layer firewalls – was independently researched and developed by several people during the late 1980s and early 1990s, in particular Gene Spafford of Purdue University, Bill Cheswick of AT&T Bell Laboratories, and Marcus Ranum. Ranum’s work received the most attention in 1991 and took the form of bastion hosts running proxy services. This evolved into the first commercial product: Digital Equipment Corporation’s SEAL product.

Around 1991, Bill Cheswick and Steve Bellovin began researching dynamic packet filtering. In 1992, Bob Braden and Annette DeSchon at USC’s Information Sciences Institute began independently researching dynamic packet filter firewalls for a system that they called «Visas». Check Point Software released the first commercial product based on this fourth generation architecture in 1994.

During 1996, Scott Wiegel, Chief Scientist at Global Internet Software Group, Inc., began laying out the plans for the fifth generation firewall architecture: the Kernel Proxy architecture. Cisco Centri Firewall, released in 1997, is the first commercial product based on this architecture.
Network Address Translation (NAT)
A denial of service (DoS) attack that seeks to flood an entity with more input than that entity can correctly process. At best, this can overload the system so that legitimate users are not properly served; at worst, the entity might fail or behave in an unexpected manner.
Footprinting is the cyber equivalent of ‹casing the joint› — the systematic gathering of target information. Serious crackers will seek to build up a footprint, or security profile, of their target. Information sought would typically include domain names, network blocks, IP addresses, system architecture, access control mechanisms and so on.
Forensics is all about obtaining proof of a misdemeanour. Computer forensics is about obtaining proof of an illegal misuse of computers in a way that could lead to the prosecution of the culprit. Most computer forensics is built on the assumption of adequate audit trails. Provided the logs exist, the main steps in computer forensics are:
* Detection of potential abuse
* Protection of the proof
* Adducing qualified evidence
* Presentation of the evidence
“…if you think you may have a problem it is better to act quickly, computer evidence is volatile and can be destroyed in a blink. It is also better to know for sure than to ignore possible consequences. If you are unfortunate to uncover a potential problem, it may be prudent to seek confidential advice from an experienced forensic examiner before rushing in. The «do it yourself» route is a risky strategy which may have far reaching effects. If you are committed to using in house staff, remember the basics of evidential integrity and don’t be tempted to use short cuts.
“When carried out correctly, forensic analysis of computer systems involved in abuse can provide valuable evidence which might otherwise have been lost or overlooked. Performed wrongly but with good intent and your evidence could give the guilty the opportunity they need to get a case dismissed.”
Andrew Sheldon, Computer Forensics and Audit Consultant
There is no simple single accepted definition of forward secrecy. However, the essence of the term is that it implies that a compromise of the current key should not compromise any future key (or earlier key, for that matter – which would be ‹backward secrecy›).
On this basis, Kerberos is not forward secret, since compromising a client’s password (and therefore compromising the key shared by the client and the authentication server) compromises future session keys shared by the client and the ticket-granting server.
File Transfer Protocol
The protocol by which files can be transferred over a TCP/IP network. Anonymous FTP is the system that allows transfer of files over the Internet where the user receiving the files need not have a valid account name and password on the system being accessed. Such a user does, though, only have access to files which have been designated as available to anonymous FTP users by the system administrator.
There is a continuing debate in security circles on whether details about bugs and vulnerabilities should or should not be published. Full publication, for example, on the Internet in discussion groups such as BugTraq, is known as ‹full disclosure›. At its most extreme, this would include the publication of an exploit to take advantage of the vulnerability.
Generally, but not necessarily, there are two camps: vendors tend not to agree with full disclosure; many independent security consultants believe it is necessary (although many believe that ‹partial› disclosure would be better).
The argument against full disclosure claims that it simply arms a horde of script kiddies who can then wreak havoc with the vulnerability.
The argument in favor of full disclosure claims that it is the only way to ensure that software vendors take security seriously and actually fix the bugs that are found. In Crypto-Gram, Nov 15 2001, Bruce Schneier wrote: “What we’ve learned during the past eight or so years is that full disclosure helps much more than it hurts. Since full disclosure has become the norm, the computer industry has transformed itself from a group of companies that ignores security and belittles vulnerabilities into one that fixes vulnerabilities as quickly as possible. A few companies are even going further, and taking security seriously enough to attempt to build quality software from the beginning: to fix vulnerabilities before the product is released…”
This was written partly in response to an essay by Scott Culp, manager of the security response center at Microsoft, who described full disclosure as “information anarchy”. “In his essay, Culp compares the practice of publishing vulnerabilities to shouting “Fire” in a crowded movie theater. What he forgets is that there actually is a fire; the vulnerabilities exist regardless. Blaming the person who disclosed the vulnerability is like imprisoning the person who first saw the flames,” counters Schneier.
In November 2001, two mature students at Cambridge University published a methodology for “Extracting a 3DES key from an IBM 4758”. This is a secure cryptographic co-processor used by banks and governments. “We are able, by a mixture of sleight-of-hand and raw processing power, to persuade an IBM 4758 running IBM’s ATM (cash machine) support software called the “Common Cryptographic Architecture” (CCA) to export any and all of its DES and 3DES keys to us. All we need is:
* about 20 minutes uninterrupted access to the device
* one person’s ability to use the Combine_Key_Parts permission
* a standard off-the-shelf $995 FPGA evaluation board from Altera
* about two days of “cracking” time…”
IBM is quoted as responding with: “The students have assumed the lack of a normal series of bank controls that would make this attack futile in anything but a laboratory environment… Nevertheless, we will address this issue with our customers in the immediate future.”
There are two points to make here. The first is that while not catastrophic, this problem is more serious than IBM implies. A bent insider or contract programmer could cause havoc. “A crooked bank manager could duplicate our work on a Monday and be off to Bermuda by Wednesday afternoon,” said one of the students.
The second point is that IBM had been informed about the potential for this attack almost a year earlier — and had apparently done nothing. Now, after the attack was published, IBM commented “we will address this issue with our customers in the immediate future.”
And remember that IBM actually has one of the better security records! So — you must decide for yourself: does full disclosure make the security world a safer or more dangerous place?
Full disclosure is now under legal threat via DMCA. At the end of July 2002, HP wrote to SnoSoft (who had earlier released details of a buffer overflow attack against Tru64 Unix) alleging that their full disclosure on BugTraq violated both the DMCA and the Computer Fraud and Abuse Act. The letter threatened that SnoSoft “could be fined up to $500,000 and imprisoned for up to five years”, and that HP reserved “the right to sue SnoSoft and its members for monies and damages caused by the posting and any use of the buffer overflow exploit”.
(Government Access to Keys)
Government access to decryption keys is considered by many to be the overriding desire of most national security agencies. It is a continuing battleground between law enforcement agencies (LEAs) and civil liberty groups. And it has been going on for many years.
The civil liberty view is that where a government can freely obtain keys to decrypt intercepted messages, there can be no personal privacy. And where there is no privacy, there can ultimately be no freedom.
The LEAs' view is that GAK is necessary to protect the law abiding public from the triple bogeymen of paedophiles, drug barons and terrorists.
Individual readers must make up their own minds.
This author believes that GAK is an abhorrence that has no place outside of a fascist police state. He has come across no statistical evidence presented by any government agency that can demonstrate any likelihood that GAK will materially alter the LEAs' task of law enforcement; and can see no ultimate purpose for GAK beyond a means of political control.
GAK exists in the UK courtesy of the RIP Act.
Esther Dyson, an advisor to President Clinton and head of an international agency that sets policy for the Internet, declared that GAK in RIP would turn the UK into a police state. She said, just prior to the bill becoming law, «The UK is not uniquely clueless on this. This is what governments do, they control things. But the Government needs to have the courage and the faith to leave people alone.»
In the UK, the Government Communications Headquarters.
GCHQ is the UK’s electronic intelligence collection agency – the jargon term for this is SIGINT – short for Signals Intelligence. It has its HQ in Cheltenham and its collection facilities are located at many sites both in the UK and overseas. It undertakes collection, decryption, language translation and, for some traffic, interpretation as well. For other types of traffic it acts as a primary collection and code breaking agency but passes the resulting information to expert cells in other government departments for interpretation (for example, the Defence Intelligence Staffs in MOD).
It has enormous collection resources, shared with NSA, and a wide range of general purpose and custom designed computer systems for code breaking. GCHQ is a part of the Foreign and Commonwealth Office and some details of its functions and the statutory basis for them are set out on its web site. Historically its role has been the collection of intelligence information but its statutory duties (set out on its web site) include:
«to monitor or interfere with electromagnetic, acoustic and other emissions and any equipment producing such emissions and to obtain and provide information derived from or related to such emissions or equipment and from encrypted material»
This shows that it is allowed to interfere and disrupt communications systems and services if it chooses to do so. There are some within government who believe that the above description gives GCHQ a mandate to penetrate computer systems both for information collection and for active disruption and deception attacks. However, others dispute this and believe that there are immense legal problems in this area of operation. So far these uncertainties appear to have limited the extent to which GCHQ has deployed operational capabilities in this area (called Offensive Information Warfare).
Geography is sometimes treated as the fourth factor in access control. It does not necessarily imply the physical geographic location of the user, but rather the actual terminal/workstation being used. Access for a particular user ID could be limited to, or excluded from, a particular workstation or range of workstations.
A UK journalist who, in 1984, was arrested with Robert Schifreen for alleged computer hacking. (Readers with long memories may remember something to do with the Duke of Edinburgh’s mailbox…) Acquitted on appeal. The case, brought under the 1981 Forgery and Counterfeiting Act because of the absence of any more suitable legislation, was responsible for the drafting and ultimate passing of the 1990 UK Computer Misuse Act.
Good Times – a hoax virus
Good Times is a hoax virus – possibly the most successful ever. It was/is an e-mail message that warns of a new virus, and exhorts the reader to ‹Forward this to all your friends. It may help them a lot.› As a result, the hoax itself spreads like a virus, clogging systems and eating bandwidth. But there never was a Good Times virus.
The original message started in November/December 1994. It stated:
Here is some important information. Beware of a file called Goodtimes. Happy Chanukah everyone, and be careful out there. There is a virus on America Online being sent by E-Mail. If you get anything called «Good Times», DON’T read it or download it. It is a virus that will erase your hard drive. Forward this to all your friends. It may help them a lot.
Some time later a revised and expanded message appeared:
The FCC released a warning last Wednesday concerning a matter of major importance to any regular user of the InterNet. Apparently, a new computer virus has been engineered by a user of America Online that is unparalleled in its destructive capability. Other, more well-known viruses such as Stoned, Airwolf, and Michaelangelo pale in comparison to the prospects of this newest creation by a warped mentality.
What makes this virus so terrifying, said the FCC, is the fact that no program needs to be exchanged for a new computer to be infected. It can be spread through the existing e-mail systems of the InterNet. Once a computer is infected, one of several things can happen. If the computer contains a hard drive, that will most likely be destroyed. If the program is not stopped, the computer’s processor will be placed in an nth-complexity infinite binary loop – which can severely damage the processor if left running that way too long. Unfortunately, most novice computer users will not realize what is happening until it is far too late.
Government Technical Assistance Centre (GTAC)
In the UK, GTAC has been established at MI5 HQ in London. The provisions of the RIP Act require ISPs (Internet service providers) in the UK to track all data traffic passing through their computers and route it to GTAC at Thames House, Millbank.
According to Duncan Campbell, “Its primary purpose will be to break codes used for private email or to protect files on personal computers. It will also receive and hold private keys to codes which British computer users may be compelled to give to the government, under the RIP Act.”
Gramm-Leach-Bliley Act (GLBA)
Enacted on November 12, 1999, US Public Law 106-102 contains privacy provisions relating to consumers' financial information. It imposes restrictions on the disclosure of consumers' personal financial information to nonaffiliated third parties. It requires that financial institutions provide notices of their information collection and sharing practices, and allow consumers to ‹opt out› if they do not wish their information to be shared with nonaffiliated third parties.
It also provides specific exceptions under which the financial institution may share personal information and from which the consumer may not opt out.
Further details can be found on the Federal Trade Commission (FTC) website: http://www.ftc.gov/.
Granularity is defined as an expression of the relative size of a data object. Fine granularity refers to small data objects (such as fields), while coarse granularity refers to larger data objects (such as files).
The term is often used within information security to describe the level of control that can be obtained. Thus, within access control mechanisms, fine granularity should imply access control capabilities to the field level, while coarse granularity would imply control only to the file level.
However, it is an expression of the relative size – and files are more finely granular than folders. There is nothing to stop vendors claiming ‹fine granularity› for a product that most security consultants would consider to provide only ‹coarse granularity›.
The term is also sometimes used for non-data objects. Thus, the phrase ‹the granularity of a single user› has been used to mean that the access control mechanism can be adjusted to include or exclude any single user, as opposed to a group of users.
Government Secure intranet (UK).
GSi started life as the main vehicle by which the UK Government sought to fulfill its pledge of enabling 25% of dealings with the public to be performed online by 2002. It entered full service in February 1998, when its purpose was to provide a secure network for government; managed access to the Internet; and a platform for shared services. It was sponsored by the Central IT Unit (CITU); managed by CCTA; and provided by Cable and Wireless.
This has now all changed. In 2003 a re-procurement project resulted in a change of supplier from Cable & Wireless to Energis. Those connected to the C&W network are being migrated across to the Energis network; and once this is complete (during 2005) the legacy network will be switched off. Fundamentally the new network offers all the same services as before but with a number of enhancements and additional services.
Furthermore, CCTA no longer exists – the GSi is now managed by OGCbuying.solutions (an agency of the Office of Government Commerce) – and it is no longer sponsored by CITU (which also no longer exists).
The Government Gateway has become the main channel for public interaction with government (paying tax online, etcetera).
The GSi’s main purpose is to enable connected organisations to communicate securely, electronically. The available connection types are: GSi (standard restricted connection), xGSi (confidential connection), GSX (aimed primarily at local authorities) and GSE (the means for external suppliers to connect to the government organisations they work with – the government org concerned must sponsor such connections).
The GSi offers, as standard, a number of Intranet facilities:
* inter-departmental e-mail, without the need for encryption, for material up to and including Restricted;
* e-mail to the Internet;
* web browsing on both GSi and the Internet;
* file transfer;
* directory services.
The GSi allows departments to publish material for the benefit of government as a whole; professional groups to establish collaborative facilities for information and discussion; and shared purchasing of facilities such as news feeds and database access.
New organisations must now sign a Code of Practice and comply with the Code of Connection relating to their choice of connection type. Further details are available at www.ogcbuyingsolutions.gov.uk/gsi_notice.asp.
Organisations migrating from the legacy to the new network are also required to update their compliance to that of the Code of Connection.
Technical and Security Features
GSi is an IP-based network accredited to Restricted High. The main protocols are Internet-based: SMTP for mail; HTTP for browsing. Connecting departments must subscribe to the GSi Community Security Policy (CSP), which establishes minimum standards in order to protect the GSi community as a whole. The GSi is protected from the Internet by an E3 accredited firewall conforming to CESG memorandum 13.
A hacker is someone with deep knowledge of and great interest in a system. A hacker is someone who likes to delve into the inner workings of a system to find out how it works.
The origin of the term is not clear. Some trace it back to the Model Railroad Club at the Massachusetts Institute of Technology in the ’50s – others to early radio enthusiasts.
Today the meaning is in transition. It is being used to describe any computer enthusiast who uses his or her knowledge to break into some other person’s computer. This is an unfortunate semantic tendency since it loses some genuine fine distinctions.
We prefer to distinguish crackers as the computer enthusiasts who break into computers.
The genuine hacker is more likely to use his or her own computer, or someone else’s computer with permission and approval. The genuine hacker will look for weaknesses in the system, but will publish his or her discoveries. The cracker is more likely to keep discoveries secret or disclosed only to other crackers.
In security terms, put very simply, a genuine hacker is a good guy; a cracker is a bad guy.
Hacktivism is a relatively new term for politically-motivated hacking. The term demonstrates how the two terms ‹hacker› and ‹cracker› are becoming confused – since there is malicious intent involved, it would be best described as ‹cracktivism›.
Nevertheless, hacktivism is what we’ve got. It describes a growing tendency for politically motivated crackers to break into sites and leave a political comment. It is particularly prevalent at times of heightened tension between two nations. Other targets include corporations with bad environmental reputations, laboratories thought to experiment on animals, and so on.
I have little doubt that there are some genuine hacktivists. I have even less doubt that the vast majority are simple crackers and script kiddies trying to mask a lack of morality in the higher ideals and romanticism of fighting for a just cause.
Effectively an issue of corporations’ liability to their own and other companies’ employees. Recent court cases have involved the use of email to transfer information of a sexual, racial or generally offensive nature. Organisations need to realise that, increasingly, it is the organisation and not the employee that is seen to own email. For this reason the onus is on employers to demonstrate that they have not only the security policy but also the technology and controls to police it, such as scanning for key words or tracking email sources.
from Content Technologies’ Guide to Content Security, 2nd Ed
Hardening is the process of optimising a system’s security configuration. It is a term usually applied to operating systems. Hardening NT/2000 would, for example, require you to:
* Disable unused and Guest accounts
* Disable unused NT Services
* Shut down all unnecessary services
* Disallow all unnecessary applications
It is a process that should be applied to all operating systems.
Many firewalls run on pre-hardened operating systems. Microsoft’s ISA server, for example, includes a built-in System Hardening Wizard that “locks down” the underlying Windows 2000 operating system by disabling services unrelated to ISA Server.
A security model that provides policies for changing access rights, and for the creation and deletion of subjects and objects – capabilities that do not exist in the Bell LaPadula model. It is based on a set of subjects, a set of objects, a set of access rights and an access matrix. It is generally considered to be one of the more complex security models.
The ability of a virus scanner to identify a potential virus by analysing the behavior of the program, rather than looking for a known virus signature.
In general, heuristic analysis is not as reliable as signature-based virus scanning as it is not possible to predict precisely what a program will do when executed. However, heuristic scanning is a useful addition to any anti-virus policy.
The main disadvantage of heuristic scanning is that the product often produces false alarms when perfectly innocent code is suspected of behaving as a virus might. The main danger with anti-virus software that produces multiple false alarms is that users will eventually start to take no notice of the false alarms, providing the possibility that a genuine virus outbreak will be missed.
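A deliberately naive sketch of the idea in Python: score a file by counting byte patterns associated with malicious behaviour. The pattern list and threshold are invented for illustration; real heuristic engines emulate or statically analyse code rather than match strings.

```python
import re

# Hypothetical, much-simplified heuristic scanner. Matching a single pattern
# is not proof of malice (hence the threshold), which also illustrates why
# heuristic scanning is prone to false alarms.
SUSPICIOUS_PATTERNS = [
    rb"CreateRemoteThread",    # code injection API
    rb"WriteProcessMemory",    # tampering with another process's memory
    rb"SetWindowsHookEx",      # keystroke-hooking API
    rb"format\s+c:",           # destructive shell command
]

def heuristic_score(data: bytes) -> int:
    """Count how many suspicious patterns appear in the data."""
    return sum(1 for pat in SUSPICIOUS_PATTERNS
               if re.search(pat, data, re.IGNORECASE))

def is_suspicious(data: bytes, threshold: int = 2) -> bool:
    return heuristic_score(data) >= threshold
```

A file mentioning one suspicious API scores 1 and passes (a false-negative risk); a file combining several crosses the threshold (or false-alarms, if it is an innocent debugging tool).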
Hijacking describes an attack where an active, established, session is intercepted and co-opted by the attacker.
In its simplest form this could be desktop hijacking (using other people’s terminals while they are away getting coffee).
A more advanced form is known as IP hijacking. This is made possible by the attacker knowing or guessing the session’s ISN (Initial Sequence Number). The ISN is used by TCP to sequence and verify the individual packets.
Since authentication has already taken place, the hijacker gains all of the privileges of the original client that is being hijacked. A meaningful attack, however, requires that the attacker already knows both the client and the host, and that the client has sufficient privileges for the attacker’s purposes.
It is, furthermore, an old and well understood Internet vulnerability. Several systems randomize the ISN to make guessing the correct next number more difficult — while encryption defeats it altogether.
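The weakness can be illustrated with a toy model in Python. This is not real TCP: the fixed-step generator mimics early TCP stacks, which incremented the ISN by a constant, while `secrets` provides the randomised alternative.

```python
import secrets

# Toy model only: contrast a predictable ISN generator with a randomised one.
# An attacker who has observed one predictable ISN can trivially compute the
# next; a random 32-bit ISN leaves roughly four billion equally likely values.

class PredictableISN:
    """Early TCP stacks incremented the ISN by a fixed step."""
    def __init__(self, start: int = 0, step: int = 64_000):
        self.value = start
        self.step = step

    def next_isn(self) -> int:
        self.value = (self.value + self.step) % 2**32
        return self.value

def random_isn() -> int:
    """Randomised approach: cryptographically strong 32-bit ISN."""
    return secrets.randbelow(2**32)

gen = PredictableISN()
observed = gen.next_isn()
guess = (observed + gen.step) % 2**32   # the attacker's trivial prediction
assert guess == gen.next_isn()          # guessing succeeds every time
```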
The Health Insurance Portability & Accountability Act of 1996 (August 21) is US Public Law 104-191. It amends the Internal Revenue Code of 1986, and is also known as the Kennedy-Kassebaum Act.
HIPAA is designed to protect the confidentiality of patient records, past, present and future. It affects all healthcare providers, and is supported by severe civil and criminal penalties for noncompliance. Compliance with the Privacy Rule is required by April 14, 2003.
A useful information site can be found at http://www.hipaadvisory.com/.
The History List is a list of pages you have recently visited on the Internet using Internet Explorer. It can be very useful in allowing you to quickly locate and revisit that page you noticed last week.
There are three views of ‹history›:
* it is what you can remember
* it is bunk
* it is something we can learn from
It is the third view that has many people concerned. The History List, showing where we have been on the Internet, can actually disclose a lot of information about us. This can help develop a user profile, which is a distinct invasion of privacy. There are Trojans that can steal our History; there are cookies that help build a History for the use of third party marketing companies. And luckily, for the paranoid like me, there are applications that periodically and automatically wipe clean both the History and any cookies that have been installed.
The purpose is to retain all the advantages of an existing hash function while enabling easy replacement of that hash function should a faster or stronger one become available or required.
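This design goal (keeping the hash function replaceable behind a fixed interface) is characteristic of keyed-hash constructions such as HMAC. Python’s standard hmac module, for example, takes the underlying hash as a parameter, so the algorithm can be upgraded without changing the calling code:

```python
import hashlib
import hmac

# The digest algorithm is a plug-in parameter: swapping MD5 for SHA-256
# requires no change to the surrounding code.
key = b"shared secret key"
message = b"important message"

tag_md5    = hmac.new(key, message, hashlib.md5).hexdigest()
tag_sha256 = hmac.new(key, message, hashlib.sha256).hexdigest()  # stronger drop-in

# Verification should use a constant-time comparison to resist timing attacks.
expected = hmac.new(key, message, hashlib.sha256).digest()
assert hmac.compare_digest(expected, hmac.new(key, message, hashlib.sha256).digest())
```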
The Internet has long been used to perpetrate hoaxes because of the ease with which ‹forgeries› can be made. Usually these are meant to be humorous, such as the many spoof press releases sending up companies such as Microsoft and IBM. Sometimes, however, they can be more sinister. In the Summer of 2000 a false press release about Emulex was distributed. Within an hour of its publication, Emulex suffered a 62% fall in its stock value.
The problem is that there are few easy ways in which information appearing on the Internet can be checked.
The most common hoax, however, is the hoax virus. This usually consists of an e-mail message warning recipients about a new and terribly destructive virus. It ends by suggesting that the reader should warn his or her friends and colleagues, perhaps by simply forwarding the original message to everyone in their address book…
The result is a rapidly growing proliferation of pointless e-mails that can increase to such an extent that they overload systems. Ironically, although a hoax is not a virus, a successful hoax virus nevertheless displays many of the characteristics of a real computer virus. It includes a mission component (denial of service), a trigger component (fear), and a self-replicating component (the gullibility of socially engineered users).
Another term for a vulnerability.
A ‹black hole› is a term used to describe the ether into which e-mails sometimes fall when they mysteriously disappear between the sender and the recipient (that is, they are sent but do not arrive, and do not generate a bounce message).
In some senses it is analogous to ‹it’s in the post›, that mythical realm that contains so many lost payments in the physical world – but with more justification and a greater likelihood of being genuine.
Homeland Security Act
The Homeland Security Act was signed by President Bush on November 25, 2002. Like the Patriot Act, it was enacted in response to the terrorist attacks of September 11, 2001.
The Homeland Security Act created the new Department of Homeland Security (DHS) with huge responsibilities and powers. The DHS, which came into existence on January 24, 2003, consolidates 22 separate agencies (including, for example, the Secret Service, the Immigration and Naturalization Service, the Coast Guard and the Border Patrol) into a new Cabinet department with 170,000 employees. It has wide-ranging authority to compile, analyze, and mine the personal information of American citizens.
Like the UK’s RIP Act, the Homeland Security Act highlights the inherent tensions between Government desire for control and individuals’ desire for freedom and privacy. These two things are incompatible, and this Act gives the US authorities massive powers for monitoring and surveillance on the Internet. It includes:
* responsibility for critical infrastructure protection
* expanded powers for police to conduct Internet or telephone eavesdropping without first obtaining a court order
* restrictions on freedom of information in relation to the critical infrastructure
* an office designed to become “the national focal point for work on law enforcement technology”
* a Directorate charged with analyzing vulnerabilities in systems including the Internet, telephone networks and other critical infrastructures
A honeypot is a system or resource that is designed to be attractive to crackers.
Honeypots can simply be lures to keep crackers away from the important stuff, or they can be traps designed to attract the cracker and then gather enough information to track him or her down. Since a honeypot has no legitimate users or traffic, any intruder is exposed and relatively easy to monitor.
A study by Global Integrity published in September 2000 found:
* a moderate proportion of attackers refrain from attacking systems within networks in which they know that measures have been taken to capture and monitor their actions
* honeypots are an excellent method of detecting insider attacks
* honeypots sidetrack attackers’ efforts, causing them to devote their attention to activities that can cause neither harm nor loss
* honeypots allow security administrators to study exactly what attackers are doing without exposing systems or networks to additional risk that results from compromised systems.
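A minimal sketch of the principle in Python: listen on an otherwise-unused port and log every connection attempt. The port number and behaviour are illustrative only; real honeypots emulate services and are carefully isolated from production networks.

```python
import datetime
import socket

# Because no legitimate service runs on this port, any connection is,
# by definition, suspicious. The function offers nothing to the client;
# it simply records who knocked, and when.
def run_honeypot(host: str = "127.0.0.1", port: int = 2222,
                 max_events: int = 1) -> list:
    events = []
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        while len(events) < max_events:
            conn, addr = srv.accept()
            # record timestamp and source of the attempt, then drop it
            events.append((datetime.datetime.now().isoformat(),
                           addr[0], addr[1]))
            conn.close()
    return events
```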
Hypertext Markup Language. The standard language used to create Web pages. Originally based on SGML; note that current versions of HTML contain active commands that cause actions in browsers. These active commands can be used in combination to form content security threats such as cyberwoozles and logic bombs.
from Content Technologies’ Guide to Content Security, 2nd Ed
Hypertext Mark-Up Language
In most cases, this sounds worse than it actually is, because embedded programs downloaded from remote Web sites are usually executed by your browser with extremely limited powers. By default, for example, Java programs from remote sites are unable to manipulate system configuration information, and are prevented from writing to the local hard disk. A similar sort of security restriction applies to VBS programs. This means that Java and VBS viruses cannot, in general, spread via Web site access.
Of course, the above remarks do not always apply. By relaxing the security settings (the details of which depend on both operating system and browser), it is possible to create an environment in which both VBS-based and Java-based viruses could spread automatically from a Web site to your computer. Over time, you may find that you need to adjust security settings on your computer in order to get the most from the sites you use frequently on the Web. Before making such adjustments, be sure that you understand the risks associated with them.
from Sophos’ V-Files
The general movement from text-based e-mail clients to HTML-based e-mail clients poses a significant new threat. Text-based e-mail poses virtually no threat (apart, perhaps, from hoaxes). However, if you use an HTML-based e-mail client, it can no longer be said that you cannot catch a virus (or Trojan) from an e-mail.
Hypertext Transfer Protocol.
Used to transfer HTML and other objects to ‹net browsers›. Note that other types of client now use HTTP protocols.
from Content Technologies’ Guide to Content Security, 2nd Ed
Internet Corporation for Assigned Names and Numbers – a non-profit, private corporation formed in October 1998 responsible for IP address space allocation, protocol parameter assignment, domain name system management, and root server system management functions (formerly undertaken by IANA and other entities under U.S. Government contract).
The Internet Protocol Suite contains numerous parameters, such as internet addresses, domain names, autonomous system numbers, protocol numbers, port numbers, management information base object identifiers, and many others. It is necessary that the values used in these parameter fields be assigned uniquely. ICANN makes the assignments as requested and maintains a registry of the current values.
A denial of service attack that sends a host more ICMP echo request («ping») packets than the protocol implementation can handle.
National ID Cards will increasingly be introduced by governments in the war against international terrorism, fraud, illegal immigration and so on. The cards will most likely be either magnetic stripe cards like the majority of current bank cards, or smart cards – probably the former in the short term and the latter in the long term.
ID cards on smart cards will become an increasing threat to security and privacy and liberty. They will become magnets for more and more personal data: national insurance numbers, driving license details, bank account numbers, computer access codes, digital signatures, DNA details, health details, and so on. This is inevitable over time once they have been introduced. Authoritarian governments will then be able to use the cards as a means of controlling the people rather than just a means of identifying the people.
An Identity Cards Bill for the United Kingdom received its second reading in Parliament at the end of 2004. The Bill can be found at Identity Cards Bill and should be required reading for anybody concerned with security (national and personal) and civil liberties.
Identity theft occurs when a criminal misappropriates someone’s personal information (such as social security number, credit card number, passwords, etc) for their own use. The purpose is generally to commit fraud or theft.
It is a serious and growing problem; and is likely to get worse as always-on broadband spreads to home users. Home users are traditionally less security-conscious than businesses, and many home users store personal data on their PCs with little protection.
The US Government maintains a website with information on identity theft at http://www.consumer.gov/idtheft/.
The Institute of Electrical and Electronics Engineers. The Institute was formed by a merger of the American Institute of Electrical Engineers and the Institute of Radio Engineers in 1963. The IEEE is a pivotal standards-setting body responsible for many of the standards used in today’s data networks, including such well known standards as IEEE 802.3 (Ethernet) and 802.5 (Token Ring).
This is an Institute of Electrical and Electronics Engineers committee that develops security standards for local area networks. The standard being developed has 8 parts:
a. Model, including security management
b. Secure Data Exchange protocol
c. Key Management
d. (has now been incorporated in ‹a›)
e. SDE Over Ethernet 2.0
f. SDE Sublayer Management
g. SDE Security Labels
h. SDE PICS Conformance.
Parts b, e, f, g, and h are incorporated in IEEE Standard 802.10-1998.
International Law Enforcement Telecommunications Seminar
An international co-ordination group established by the FBI in the early 1990s – thought to be behind the growing worldwide tendency for governments to establish ‹legal interception› of telecommunications. It is generally believed that the NSA has a guiding but low-key involvement.
The main participants are the EU governments and the Echelon partners. When telephone companies were all nationalised organisations it was relatively easy for governments to intercept communications. With privatisation, a rapid proliferation of new telecoms companies and technological advances it is no longer so easy. Hence ILETS.
An insider is a user with legitimate access to company systems — usually an employee of the company.
The need to secure systems against one’s own staff is often overlooked, which explains why a consistently very high proportion of all security incidents are caused by insiders. In many cases, of course, the damage is inadvertent; but many insider incidents are malicious. Many others probably go undetected, since there are insufficient security controls to prevent them and insufficient system logging to detect them.
Defenses against insider attacks include:
* access control
* an adequate and enforced security policy
* staff security awareness training
More properly ‹data integrity›, it is the property that the data in question has not been changed. Maintaining demonstrable data integrity is one of the cardinal aims of data security (see also accountability, availability and confidentiality).
It is particularly important that integrity and confidentiality be combined, so that sensitive information can neither be altered nor read without authorization.
Data whose integrity has failed is said to be corrupted. Data whose confidentiality has failed is said to be compromised.
See also integrity checker.
An integrity checker is a defense against Trojan Horses and viruses – a form of intrusion detection where the intruder is active rather than just passive. It is designed to issue an alert if a binary file is altered (compromised) in any way.
Integrity checkers use checksums. A checksum is calculated and stored. The process is regularly repeated, and the new checksum compared to the stored checksum. If the two values are identical, the binary file has not been altered. If there is any difference in the checksum, then you can be certain that the binary file has been either corrupted or compromised. It could have been corrupted by natural events or the malicious intervention of an intruder; or it could have been compromised by infection from malware.
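The checksum process described above can be sketched in Python with the standard hashlib module. File names and contents are illustrative, and a real product would also protect the checksum store itself from tampering.

```python
import hashlib

# Record a checksum for each file, re-compute it later, and compare.
# Any mismatch means the file has been corrupted or compromised.
def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

baseline = {}

def record(name: str, data: bytes) -> None:
    """Store the known-good checksum for a file."""
    baseline[name] = checksum(data)

def verify(name: str, data: bytes) -> bool:
    """True if the file still matches its recorded checksum."""
    return baseline.get(name) == checksum(data)

record("login.exe", b"original binary contents")
assert verify("login.exe", b"original binary contents")       # unchanged
assert not verify("login.exe", b"original binary contentz")   # altered: alert!
```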
International Data Encryption Algorithm (IDEA)
IDEA is a symmetric 64-bit block cipher with a 128-bit key. It was developed by Xuejia Lai and James Massey in Switzerland and released in 1992. It is, however, patented in both the USA and Europe, and must be licensed for commercial use.
The intention was to provide a replacement for DES, which it achieves successfully. It is considered to be a strong algorithm and has a key length long enough (128-bits) to make brute force attacks unrealistic.
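The claim that brute force is unrealistic is easy to check with back-of-envelope arithmetic. The trial rate below is an assumed attacker capability, chosen generously:

```python
# Even at 10^12 key trials per second, searching half of a 128-bit keyspace
# takes on the order of 5 * 10^18 years: hundreds of millions of times the
# age of the universe.
keyspace = 2 ** 128                  # about 3.4e38 possible keys
trials_per_second = 10 ** 12         # assumed attacker capability
seconds = keyspace / 2 / trials_per_second
years = seconds / (60 * 60 * 24 * 365)
assert years > 10 ** 18
```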
IDEA was the algorithm originally selected by Phil Zimmermann to provide the secret key part of PGP.
A term for the single, interconnected, worldwide system of computer networks that share the set of protocols (the Internet Protocol Suite, or TCP/IP) specified by the IAB [R2026]; and the name and address spaces managed by the Internet Corporation for Assigned Names and Numbers (ICANN). The term is often used synonymously with «the information superhighway», and the World Wide Web (WWW). The latter is, in reality, a service that exists within or on the Internet, but is not the whole of the Internet.
It started with four interconnected computers in 1969 and was known as ARPAnet. It now, and increasingly so, forms the basis of the global economy.
Note that the word internet is short for ‹internetwork›; that is, a network of networks. The Internet is the complete network of all the networks interconnected using TCP/IP. An internet is any network of networks.
The security issue is that whenever you have a live connection to the Internet, you can guarantee that you are on the same network at the same time as thousands, possibly hundreds of thousands, of hackers and crackers who are looking for systems like yours to compromise.
Internet Protocol (IP)
The Internet standard protocol that provides a common layer over dissimilar networks. It is used to move datagrams (discrete sets of bits) between host computers and through gateways if necessary. It does not provide the reliable delivery, flow control, sequencing, or other end-to-end services that TCP provides.
The Internet Trail is the trace, left by your browser, of where you have been on the Internet.
On Internet Explorer, for example, there is a History Folder that contains links to the web pages you have visited – it’s a recorded history of where you have been. This History Folder is part, but not all, of your Internet Trail.
The Internet Trail is non-specific – it includes anything that can be used to identify places you have visited on the Internet. So it can include, for example, all the files, including the graphics, held in your temporary internet folder.
Tracking cookies, that can tell other websites where you have been, are also part of your Internet Trail…
An intruder is an entity that has gained access to a computer without proper or valid authorization so to do. Note that an intruder could thus be a hacker or a cracker who is illegally logged on to the system, or a Trojan Horse that has been left behind by a cracker. Similarly, the intruder could be someone who has gained illegal access via the firewall from the Internet, or an employee who has gained access to restricted parts of the system from inside the firewall.
Once past the firewall, an intruder is free to do whatever he or she wants. Intrusion detection systems are designed to recognize and react to anomalous behavior that might indicate the presence of an intruder on the system.
Intrusion Detection System (IDS)
Intrusion Prevention System (IPS)
Intrusion Prevention Systems are one of the more recent developments in the security armoury. However, how they differ from IDS or even advanced firewalls is sometimes difficult to see. This has led many commentators to dismiss IPS as merely a marketing device.
The key difference is in the naming: IDS is fundamentally designed to detect an intrusion that has already happened; IPS is designed to prevent the intrusion from succeeding. So in one sense, IDS can be seen as belonging inside the firewall, while IPS belongs at or outside of the firewall. Which means that another way of looking at IPS is as an advanced form of or adjunct to the good old firewall itself.
“A numerical address allocated to identify nodes on a TCP/IP network. These addresses can be statically or dynamically allocated. In the case of dynamically allocated addresses, provision often has to be made to keep firewalls aware of which system is using which IP address…
“IPV6 will allow for more effective dynamic allocation of IP addresses. This in turn, however, may cause further traceability issues, where hackers can use an IP address for a few seconds, before changing to another.”
from Content Technologies› Guide to Content Security, 2nd Ed
An IP version 4 [R0791] address comprises a series of four 8-bit numbers separated by periods. For example, the address of the host named «a_domain.com» could be 188.8.131.52.
An IP version 6 [R2373] address is written as x:x:x:x:x:x:x:x, where each «x» is the hexadecimal value of one of the eight 16-bit parts of the address. For example, 1081:0:2:0:8:900:200B:417C and FADC:BA98:7644:3220:FEBC:BC98:7854:3910.
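Both address formats can be parsed and validated with Python’s standard ipaddress module. The addresses below are reserved documentation examples, not the ones in the text:

```python
import ipaddress

# IPv4: four 8-bit numbers separated by periods.
v4 = ipaddress.ip_address("192.0.2.1")
# IPv6: eight 16-bit groups written in hexadecimal.
v6 = ipaddress.ip_address("2001:db8:0:0:8:900:200b:417c")

assert v4.version == 4 and v6.version == 6

# A single run of zero groups may be compressed with "::".
assert ipaddress.ip_address("2001:db8:0:0:0:0:0:1") == \
       ipaddress.ip_address("2001:db8::1")
```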
IP spoofing involves imitating a trusted IP address in order to gain access to protected information resources. One method is by exploiting source routing in IPv4. This allows the originator of a datagram to specify certain, or even all, of the intermediate routers that the datagram must pass through on its way to the destination address. Effectively, you make the destination host think that you are a known and trusted host rather than a schoolkid on his father’s laptop.
The Internet Protocol Security Architecture (IPsec), produced by the Security Working Group of the IETF. In 1995 IPsec was proposed as an option to be implemented with IPv4 and as an extension header in IPv6. (Implementation of IPsec protocols is optional for IPv4, but mandatory for IPv6.)
The IPsec architecture specifies:
* security protocols
* AH – Authentication Header, which essentially allows authentication of the sender of data
* ESP – Encapsulating Security Payload, which supports both authentication of the sender and encryption of data,
* security associations (what they are, how they work, how they are managed, and associated processing),
* key management (IKE),
* algorithms for authentication and encryption.
The set of security services includes access control service, connectionless data integrity service, data origin authentication service, protection against replays (detection of the arrival of duplicate datagrams, within a constrained window), data confidentiality service, and limited traffic flow confidentiality.
(The letters ‹sec› should be lower case.)
The IPsec architecture is described in RFC 2401 (Security Architecture for the Internet Protocol); RFC 3193 (Securing L2TP using IPsec) describes one application of it.
Internet Relay Chat. A worldwide «party line» protocol that allows real-time text conversations across the Internet. Users can connect to a chat server with several ‹rooms›. They can enter a room and talk to others in the same room in real time. Many users can participate at once. People using IRC often adopt different personalities and communicate with language and behaviours that you would not normally expect to see.
IRC is structured as a network of Internet servers, each of which accepts connections from client programs, one per user. IRC discussions can be frank, uncensored and usually unmoderated.
IRC provides a view of the best and the worst of the Internet. On the one hand it is uncensored and international, allowing the free exchange of ideas. But on the other hand these ideas may not always conform to society’s best practices. Parents, for example, should always monitor children’s use of IRC since paedophiles and other perverts have been known to lurk in child chat rooms.
Nor is IRC free of traditional security problems. IRC Worms have exploited a security design flaw of mIRC software that allows the default script file (SCRIPT.INI) to be overwritten when files are transferred using the DCC protocol. And in Worst Nightmares Come Alive, Roelof Temmingh describes how IRC could be used for mass ‹infection›:
The package will be distributed on the Internet. This is done by «robots». These «robots» will upload the infected package to FTP servers, mass mail the package to users, repackage existing software to contain the AI, and DCC the package at random to users connected to IRC servers. The ‹net should be flooded by infected programs, all different in size and apparent functionality.
The SubSeven DEFCON8 2.1 Backdoor can be controlled remotely via IRC commands to do things like launch a Distributed Denial of Service attack, and to notify the attacker each time a new machine is infected.
International Organization for Standardization – a voluntary, non-treaty, non-government organization, established in 1947, with voting members that are designated standards bodies of participating nations and non-voting observer organizations.
The development process has four levels of increasing maturity:
* Working Draft (WD)
* Committee Draft (CD)
* Draft International Standard (DIS)
* International Standard (IS).
International Traffic in Arms Regulations. Rules issued by the U.S. State Department, by authority of the Arms Export Control Act (22 U.S.C. 2778), to control export and import of defense articles and defense services.
ITAR regulations were used to control the export of US cryptographic products by declaring them to be effectively ‹munitions›. At the end of 1996, cryptography export was transferred to the Export Administration Regulations of the Department of Commerce. The export policy was relaxed to favor export of data-recovery cryptography.
Information Technology Security Evaluation Criteria, a scheme for the evaluation of security products run in the UK by the DTI and CESG.
ITSEC was probably the most successful computer security evaluation criteria of the 1990s. It offers greater flexibility than TCSEC and is easier and cheaper to use. It has also benefited from being developed after the Orange Book, and from being closer to modern technology.
Another factor in its success has been ‹mutual recognition›. Being European criteria, and having an established common methodology, it has been comparatively straightforward to negotiate recognition agreements. This process started in 1996 with a bilateral agreement between the UK and Germany. It was added to by an agreement between the UK and France in 1997. And in March 1998, an agreement was signed by the UK, France, Finland, Germany, Greece, the Netherlands, Norway, Portugal, Spain, Sweden and Switzerland, whereby ITSEC certificates issued by any of the Qualifying Certification Bodies would be recognized in all other countries. The initial Qualifying Certification Bodies were those of the UK, France and Germany.
“In May 1990 France, Germany, the Netherlands and the United Kingdom published the Information Technology Security Evaluation Criteria (ITSEC) based on existing work in their respective countries. Following extensive international review, Version 1.2 was subsequently published in June 1991 by the Commission of the European Communities for operational use within evaluation and certification schemes.
“ITSEC is a structured set of criteria for evaluating computer security within products and systems. Each evaluation involves a detailed examination of IT security features culminating in comprehensive and informed functional and penetration testing. This work is undertaken using an agreed Security Target as the baseline for ensuring that a product or system meets its security specification. ITSEC operates the concept of assurance levels E0 to E6. This scale represents ascending levels of confidence that can be placed in the TOE’s security functions and determines the rigour of the evaluation. Since the launch of ITSEC in 1990, a number of other European countries have agreed to recognise the validity of ITSEC evaluations.”
from UK ITSEC FAQ
International Telecommunications Union, Telecommunication Standardization Sector (formerly “CCITT”), a United Nations treaty organization that is composed mainly of postal, telephone, and telegraph authorities of the member countries and that publishes standards called “Recommendations”.
X.400 is the MHS (Mail Handling System) that is the alternative standard e-mail architecture to the SMTP used by the Internet.
X.435 is designed to support Electronic Data Interchange (EDI) messaging.
X.500 specifies directory services.
X.509 specifies the authentication service for X.500 directories, as well as the de facto digital certificate syntax.
ITU-T cooperates with ISO on communication protocol standards, and many Recommendations in that area are also published as an ISO standard with an ISO name and number.
A programming language developed by Sun Microsystems. It resembles C++, but was designed to avoid some of C++’s most notorious flaws. It is believed by many observers to be well-suited to WWW-based application development, particularly because of its cross-platform nature, and its sandbox security concept.
“There are situations when a Java applet (small Java application) can raise content security issues, such as having more access than perhaps the user intended, or being able to make unauthorised removal of data from the user’s system. These could be:
* when run as a plug-in for a Web browser
* when run from a file:// based URL
* when run outside a browser environment (unless other security is available).
According to the 5th Black Box Network Industry Survey, 4% of those with Internet access said their organisations were seriously affected by damage caused by downloaded Java applets.”
from Content Technologies' Guide to Content Security, 2nd Ed
A portable, platform-independent means of creating reusable components. Created by Sun Microsystems, Java Beans is intended to be similar in functionality to OpenDOC, ActiveX, OLE, and COM, but more “open” because of the cross-platform nature of Java.
The process of removing Java tags from HTML pages during download in order to prevent the Java code from executing on the client machine.
* Web Browsers: used for client-side scripts on web pages to provide dynamic content at the browser, such as forms validation, navigation aids, visual effects.
WSH was originally provided with NT 4 Option Pack and Windows 98. It is also available for Windows 95.
from Jim Press, ICL
A Joe-job is spam that uses a forged return address with the specific intention of damaging or tarnishing the reputation of the forged domain.
The term comes from such an action used against Joe’s Cybershop in 1996. Joe Doll (proprietor) provided free web pages under certain conditions. One ‹customer› advertised his page using spam. Joe Doll warned him to stop. The customer stopped then restarted. Joe Doll removed the customer’s web page. “The response was swift, massive and ugly. It included threats, forged messages to spam lists, and mail bombs. Enraged victims have mounted mail, ping, syn, and other attacks on joes.com, incited to vigilante justice by the forger,” Doll subsequently wrote.
The spammer had retaliated by further spams, but this time using Joe Doll’s address as a forged From address.
A Joe-job has two basic effects. Innocent spam victims can be duped into thinking that the forged address belongs to the spammer. These victims can respond angrily either directly to the joe, or indirectly to his ISP – where the danger is that the joe could even lose his website. In reality, most people can recognize a Joe-job when they see one – which just leaves the second effect: the joe’s website can be overwhelmed by hundreds of thousands of spam messages bouncing back to the forged From address.
Strictly speaking, since the term is based on the Joe’s Cybershop incident, it should apply solely to address forgeries where the motive is to do harm to the joe. Without this motive, it is just another example of email-spoofing spam. It is likely, however, that the term will come to be used to describe the effect of any large scale spam that uses a forged From address.
Joint Photographic Experts Group File
These files (pronounced “jay-peg”) are image bitmaps (see BMP). They typically offer excellent compression, though with some loss in quality, and are thus popular on the Web, as they are quicker to download.
Because high compression is achieved by reducing the quality of the image, JPEG files sometimes show unnatural-looking effects in some parts of the image. These side-effects do not generally reduce the usability of the picture, especially on the Web, and are regarded as a satisfactory tradeoff of file size against image quality. JPEG files are not programs, and cannot be infected with a virus.
However, a hoax (in numerous guises) exists which warns you of viruses which infect JPEG files. The hoax asks you to look out for defects in image quality, which are said to be symptomatic of infection. Of course, the defects described are those that are common in JPEGs, but since you are unlikely to notice them until they are pointed out, the hoax appears to be credible, and you may believe you are infected. You are not.
from Sophos' V-Files
More recently, in September 2004, a critical flaw in the Microsoft software that processes JPEG images has made JPEGs potentially more dangerous. Patches to the MS software have been released, but any unpatched software will still be vulnerable. The flaw allows a buffer overflow exploit to be engineered by a specially crafted JPEG image. The result could be that the attacker would assume control of the PC at the user’s privilege level – if logged on as Administrator, the attacker would have total control over the PC.
Kerberos is a network authentication system developed by Project Athena at MIT. It uses secret keys for both encryption and authentication. It does not provide digital signatures: its purpose is to authenticate requests for network resources rather than to authenticate the authorship of documents.
Windows 2000 uses Kerberos v5 for user authentication.
One of the biggest practical problems in symmetric cryptography (and perhaps the driving force behind the development of asymmetric public key cryptography) is the secure distribution of keys. There are two elements to the problem: how do you distribute the keys securely, and how do you cope with the sheer volume of keys that are necessary in a symmetric crypto system.
Consider an organization with ten people, each of whom needs to be able to communicate securely with each of the others. Each person needs to hold nine different encryption keys. Since each key is shared between a pair of users, an organization of ten people needs 10(10-1)/2; that is, 45 distinct secret keys. An organization of 100 people would need 100(100-1)/2; that is, 4,950 distinct keys. And 1,000 people would need 1000(1000-1)/2; that is, 499,500 distinct keys. Each of these keys needs to be distributed in a secure manner to the users concerned. And, of course, the problem recurs whenever a key is compromised or lost, or a member of staff leaves, joins or is sacked.
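The growth in the number of keys can be sketched in a few lines of Python. Since each secret key is shared by exactly one pair of users, the count of distinct keys for n users is n(n-1)/2:

```python
def distinct_keys(n):
    # Every pair of users shares one secret key; dividing by 2
    # avoids counting each shared key twice.
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(n, distinct_keys(n))
```

For 10, 100 and 1,000 users this gives 45, 4,950 and 499,500 distinct keys respectively, illustrating the quadratic growth that makes purely symmetric key distribution unwieldy.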
There are numerous ways in which the problem of key distribution can be tackled, each with varying degrees of security and sophistication. One popular method is the use of a key distribution center (KDC). Here, each user needs to share only one secret key – the key used to communicate with the KDC. Alice would tell the KDC that she wishes to communicate with Bob. The KDC then produces or obtains two copies of a relevant key and sends one each to Alice and Bob. These keys could be for one-time use, for long-term use, or for a specified period.
Kerberos is an example of an encryption system that uses a Key Distribution Center.
A process used by two or more parties to exchange keys in cryptosystems.
A key exchange protocol is also called a key agreement protocol. It is the methodology by which various parties can exchange secret keys securely across an insecure network. It is therefore particularly relevant for people who decide to use symmetric cryptography. A key exchange protocol may well use some element of asymmetric cryptography for the exchange.
The best known key exchange protocol is probably Diffie-Hellman.
KEA (Key Exchange Algorithm) was developed and classified as ‹secret› by the NSA. It is similar to the Diffie-Hellman algorithm, using 1024-bit keys. It was declassified by the NSA on 23 June 1998.
Some encryption programs store your encryption keys in a file where they can be conveniently accessed. Usually, the keys are themselves strongly encrypted — this means that you need to enter a pass-phrase to begin using the key file, but you do not then need to enter each key as it is used. This helps ensure that if your key file is stolen, it will be of limited use to the attacker.
Even so, you are advised not to store key files on your hard disk because of the risk of compromise. Store them on a removable floppy disk, if you can. That way there is less to go wrong. Be aware, also, that some key files are stored unencrypted (see PWL), and you are advised to avoid such files at all costs.
Recently, a virus appeared (WM97/Caligula) which can find your PGP secret keyring (if you have PGP installed) and send it via FTP, behind your back, to a virus writer's Web site. The virus styles itself “an exercise in espionage-enabled viruses”, and serves as a reminder that anything that can be done in software can be done in a virus.
from Sophos' V-Files
Key management is the administrative side of cryptography, and is one of the biggest problems faced by any crypto system. It involves the generation, certification, distribution and revocation of keys – all of which must be done in a secure manner. It can be undertaken manually, by software, or by outsourcing to a third party such as a Certification Authority. It is the difficulties of key management that make the one unbreakable crypto system, the One Time Pad, unrealistic for the commercial market.
A key pair is a set of mathematically related keys – typically the public and the associated private key in an asymmetric crypto system. The two keys are related in a way that makes it computationally infeasible to derive the private key from the public key.
Key Retrieval is the generic name given to government’s desire for the ability to decrypt intercepted messages. Since it is effectively impossible to decipher today’s strong encryptions, governments are seeking to enforce their ability to retrieve cryptographic keys. It isn’t ultimately important how this is achieved: it could be via mandatory key escrow or by hidden back doors – or by legislation such as the UK’s RIP Act.
A key ring is a device, usually a metal ring, used to keep various keys (front door, car, back door,etc) in one place for easy access.
In cryptography, a key ring is just the same – a software device designed to keep the public keys of colleagues and associates in a single place for easy access and retrieval.
This provides an additional level of security since a new hash result cannot be correctly calculated without knowledge of the secret key.
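A minimal sketch of a keyed hash, using Python's standard hmac module; the key and message values here are purely illustrative:

```python
import hashlib
import hmac

key = b"shared-secret"  # illustrative key, known only to sender and receiver
message = b"transfer funds to account 12345"

# The digest depends on both the message and the secret key, so an
# attacker who alters the message cannot recompute a valid digest.
digest = hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(key, message, received_digest):
    # Recompute the digest with the shared key and compare in
    # constant time to resist timing attacks.
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, received_digest)
```

Any change to either the message or the key produces a completely different digest, so verification fails for tampered data.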
The name given to the range of possible values for a cryptographic key. Normally described in terms of bits, as in the number of bits needed to count every distinct key. The longer the key length (in bits), the greater the keyspace (the range of possible key values doubles for every ‹bit› added).
A brute force attack will on average require 50% of all possible keys to be guessed before the correct key is found. The keyspace is consequently used as a simple measure to describe the strength of the cryptosystem. A 64 bit keyspace is no longer considered sufficient to defeat a brute force attack. A 128 bit keyspace is often considered to be the requirement.
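The arithmetic behind these figures can be sketched as follows: each extra bit doubles the keyspace, and a brute force attack needs on average to try half of it:

```python
def keyspace(bits):
    # The number of distinct keys of the given length in bits.
    return 2 ** bits

def average_brute_force_guesses(bits):
    # On average, half of all possible keys must be tried
    # before the correct key is found.
    return keyspace(bits) // 2

# Adding a single bit doubles the keyspace.
assert keyspace(65) == 2 * keyspace(64)
```

This is why the jump from 64-bit to 128-bit keys is not a doubling of strength but a multiplication by 2^64.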
Kilgetty is a group of products designed by CESG to provide protection for UK government data stored on PCs.
Kilgetty is a whole disk encryptor. It was developed following the theft of a laptop, containing the Allies' Gulf War Plans, at the height of the offensive. Kilgetty, and Kilgetty Plus, were designed by the CESG based at Cheltenham, and developed by Serco in Gloucester. The products have been certified for government use.
As a result, Kilgetty officially provides protection to CONFIDENTIAL level and Kilgetty Plus to TOP SECRET provided ‹physical protection› is used as set out in the Kilgetty Handling Guidance supplied with the product. Without this ‹physical protection› the limits are RESTRICTED and CONFIDENTIAL respectively.
Kilgetty has not so far been a successful product. Take-up has been low, even within the MoD for whom it was designed. Relatively few copies have been sold despite the strong recommendations by CESG for its use in Government circles. It has been suggested that the main reason for this is that the design is from a security purist’s viewpoint, with scant regard for real life usability – Kilgetty has a reputation as being very user unfriendly.
Klez is the name of a family of viruses.
* Klez-A was detected by AV company Sophos in October 2001.
* Klez-E and Klez-F were detected in January 2002. These appear to be the most destructive variants. On the 6th of March, May, September and November they attempt to overwrite files on all drives which have any of a range of specific extensions. On the 6th January and July they attempt to overwrite all files on all drives. (Note that overwriting files is more destructive than simply deleting them — it makes it much more difficult and expensive to recover the files.)
* Klez-H was detected in March 2002, and rapidly became the most common virus in circulation. It hasn’t caused any viral storms like Loveletter and Code Red, nor is it as destructive as Klez-E and F, but it is remarkably persistent. It is already described as being one of the top three viruses of all time, judged by the number of interceptions by AV vendors — and it is, at the time of writing, still going strong.
Klez-H includes a number of features that explain its success.
* It exploits a vulnerability in some versions of Microsoft Outlook and Outlook Express, and Internet Explorer. This causes an attached executable file to run automatically without being double-clicked. The vulnerability has long been known, and Microsoft has issued a patch (http://www.microsoft.com/technet/security/bulletin/MS01-027.asp), but it is clear that many business and especially home users have not updated their versions of Outlook.
* It includes its own SMTP engine, and it searches the infected user's Address Book for new targets. It is then able to spoof the mail header as it remails itself. Recipients get a mail from a friend or colleague, and are consequently more likely to trust it. This also makes it more difficult to track down the infected originator, since the mail did not come from the address in the ‹return› field of the mail header, and the apparent sender is (probably) not infected.
* It compiles a wide range of possible e-mail subjects and messages from an extensive list of internal variables. Thus, recipients who may have heard that the message “Hi, a special good website” probably carries Klez may not be aware that “Hello, a very funny game”, “so cool a flash,enjoy it”, and “japanese lass' sexy pictures” are just variants.
* It attempts to delete a range of AV products that may be installed on the infected PC.
Despite its success, and despite its combination of various ‹tricks›, Klez is not technologically innovative. If users had updated their AV defenses or patched their version of Outlook, it would be defeated. That it continues to spread demonstrates that most users still do not take basic precautions.
It is important always that users update their software with the latest vendor patches, and maintain their AV software. A number of websites offer free PC virus scanning.
An attack against (attempt to decrypt) ciphertext when some of the plaintext associated with the ciphertext is already known.
“It is surprisingly reasonable that The Opponent might well have some known plaintext (and related ciphertext): This might be the return address on a letter, a known report, or even some suspected words. Sometimes the cryptosystem will carry unauthorized messages like birthday greetings which are then exposed, due to their apparently innocuous content.”
Ritter’s Crypto Glossary and Dictionary of Technical Cryptography
L0pht Heavy Industries
“A Boston-based group of hackers interested in free information distribution, finding alternatives to the Internet and testing the security of various products. Their web site houses the archives of the Whacked Mac Archives, Black Crawling Systems, Dr. Who’s Radiophone, the Cult of the Dead Cow, and others. Current membership includes Mudge, Space Rogue, Brian Oblivion, Kingpin, Weld Pond, Tan, Stefan von Neumann and Megan A. Haquer. They can be reached at email@example.com and maintain a web site at http://www.l0pht.com.”
Hacker’s Encyclopedia, by Logik Bomb (FOA), http://www.xmission.com/~ryder/hack.html, (1997- Revised Second Edition)
L0pht members are generally considered to be white hat hackers rather than black hat crackers.
L0phtCrack is the de facto standard NT password auditing tool for U.S. industry, government and military. L0phtcrack recovers passwords from Windows NT registries or network sniffer logs in a variety of fashions, including exhaustive keyspace attacks. http://www.l0pht.com/l0phtsecurity.htm?s=326l
The marking of an item of information that reflects its information security classification.
An internal label is the marking of an item of information that reflects the classification of that item within the confines of the medium containing the information.
An external label is a visible or readable marking on the outside of the medium or its cover that reflects the security classification information resident within that particular medium.
Labour’s pre-1997-election crypto policy (UK)
Before the General Election that brought the Labour Party to Government in 1997, it had a clear crypto policy set out in ‹Communicating Britain's Future› (1995). This policy was effectively part of the Labour manifesto and was published on the Labour Party website.
Subsequent to achieving power, however, this policy was withdrawn from the site. As a matter of historical record, the crypto relevant section is quoted here:
‹We do not accept the ‹clipper chip› argument developed in the United States for the authorities to be able to swoop down on any encrypted message at will and unscramble it. The only power we would wish to give to the authorities, in order to pursue a defined legitimate anti-criminal purpose, would be to enable decryption to be demanded under judicial warrant (in the same way that a warrant is required in order to search someone’s home). Attempts to control the use of encryption technology are wrong in principle, unworkable in practice, and damaging to the long-term economic value of the information networks. Furthermore, the rate of change of technology and the ease with which ideas or computer software can be disseminated over the Internet and other networks make technical solutions unworkable. Adequate controls can be put in place based around current laws covering search and seizure and the disclosure of information. It is not necessary to criminalise a large section of the network-using public to control the activities of a very small minority of law-breakers›
Contrast this with the Regulation of Investigatory Powers Act (RIPA) brought in by the subsequent Labour Government.
‹Latent› means concealed or dormant; existing but not manifest. It has two particular connotations within IT.
In communications, latency is the time occurring between initiating a request for data and the beginning of the actual data transfer. On a disk, for example, latency is the time it takes for the selected sector to be positioned under the read/write head. On the Internet, it is the time between entering the URL in the client, and the server beginning its response.
In computer security, latency has relevance to malware – it is the period between infection and the first obvious damage to the system. Many viruses written by amateurs have a short latency and are detected before they spread very far. However, more damaging viruses can lie dormant, or replicate to many other hosts, before the payload is activated.
Layer 2 Tunneling Protocol (L2TP)
L2TP is an Internet client-server protocol that combines aspects of PPTP and L2F and supports tunneling of PPP over an IP network or other switched network. It is used to provide an encrypted link (tunnel) over a public network, providing the security for a VPN.
L2TP itself does not specify security services; it depends on protocols layered above and below it to provide any needed security.
L2TP is described in RFC 2661: Layer Two Tunneling Protocol “L2TP”.
The principle of using multiple overlapping security applications to provide greater security in depth (for example, combining the use of a firewall, an IDS, anti-virus software and content security).
Lightweight Directory Access Protocol is the evolving standard for access to stored data over the Internet. It was originally developed by Netscape and the University of Michigan as a ‘Lite’ version of DAP (X.500 directory services). It is ideal for the storage and retrieval of items such as e-mail addresses, and is now a de facto standard for the secure storage and retrieval of public keys and certificates.
Use of user ID and password information obtained illicitly from one host to compromise another host. The act of TELNETing through one or more hosts in order to prevent the source of the attack being traced (which is a standard cracker procedure).
The principle that users and processes should operate with no more privileges than are absolutely necessary to perform the duties of their current role or function. The application of this principle limits the potential damage that could occur from a breach or failure in the security.
Two things are increasing dramatically: use of the Internet, and legal requirements on computer users. Within the latter there is a general trend towards placing responsibility as much on the computer ‹owner› (often the company directors) as on the computer user. The combined effect is to make it important that companies make liberal use of legal disclaimers.
The two most important are:
* e-mail disclaimer: specifying that the e-mail contents are the views and opinions of the author and not the company
* website access policy: if you have a restricted website, it is important that you include a ‹restricted› notice on the home page. While this won’t deter hackers, it gives you a clear legal case against them.
While it is good advice to use such disclaimers, much legal opinion questions their true legal effectiveness.
A piece of email containing live data intended to do malicious things to the recipient’s machine or terminal. Under UNIX, a letterbomb can also try to get part of its content interpreted as a shell command to the mailer. The results of this could range from amusing to denial of service.
A content security element which analyses data streams for pre-defined key words and phrases, enabling organisations to eliminate threats such as leakage of confidential information via email, the spread of libellous comments, harassment and discrimination, or to prevent access to Websites deemed unsuitable in an organisation’s security policy. A search for a phrase such as “over 18” is often more effective in blocking access to offensive Websites than relying on lists of URLs known to contain adult material.
from Content Technologies' Guide to Content Security
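At its simplest, this kind of phrase matching can be sketched in a few lines of Python; the blocked_phrases list here is purely illustrative of a policy-defined word list (real products use far larger lists and more sophisticated matching):

```python
# Illustrative policy list; a real product would hold many such entries.
blocked_phrases = ["over 18", "strictly confidential"]

def scan(text):
    # Return every pre-defined phrase found in the data stream,
    # matching case-insensitively.
    lowered = text.lower()
    return [phrase for phrase in blocked_phrases if phrase in lowered]

hits = scan("This site is for visitors over 18 only.")
```

A message or page is then blocked, quarantined or logged according to which phrases the scan returns.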
These are files containing groups of often-used computer code which can be shared amongst many programs. This has several advantages: programmers who use library code do not need to keep reinventing the wheel; programs which invoke library code do not each need to include a copy of that code, making their files smaller; updates to library code can be applied in one place, rather than in many programs. In Windows systems, the commonest form of library is the Dynamic Link Library (DLL).
There are certain disadvantages to DLLs, too. A bug in library code produces a bug in every program which uses that library. A single missing library file might prevent dozens of other programs from executing. A virus which infects a DLL automatically “infects” any program which uses that DLL.
One virus which does this is W32/Ska-Happy99, which infects the file WSOCK32.DLL, allowing the virus to intercept all future attempts to send email (independently of the email software you are using). The virus then emails a copy of itself along with every email you send.
from Sophos' V-Files
A resident computer program that triggers the perpetration of an unauthorized act when particular states of the system are realized. For example, a logic bomb could remain hidden and dormant until December 25th, and then delete all or specified files.
Luhn Check Digit Algorithm
The algorithm used to check the validity of bank/credit card numbers. Starting from the rightmost digit, double every second digit; if the result is greater than 9, subtract 9 to reduce it to a single digit. Add these values to the digits that were not doubled. The total must be a multiple of 10 for the card number to be valid.
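The check can be sketched in Python as follows, using the widely published Luhn test number 79927398713:

```python
def luhn_valid(number: str) -> bool:
    digits = [int(ch) for ch in number if ch.isdigit()]
    total = 0
    # Work from the rightmost digit, doubling every second digit.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9  # reduce to a single digit
        total += d
    # Valid when the total is a multiple of 10.
    return total % 10 == 0

print(luhn_valid("79927398713"))  # True
```

Changing any single digit (for example the final check digit) causes the total to fall off a multiple of 10, so simple transcription errors are caught.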
Mobile commerce – a subset of electronic commerce that specifically refers to transactions undertaken via mobile wireless devices such as mobile phones, PDAs and laptop computers.
M-commerce, generally speaking, creates more security concerns than traditional e-commerce. Apart from the obvious dangers of theft and loss, you need also to consider the two problems identified as ‹the encryption gap› (the likelihood that encrypted data will appear as plaintext at the WAP Gateway), and the ‹policy gap› (the tendency for corporate security policies to be ignored by the mobile workforce outside of the office).
In particular, m-commerce security should:
* authenticate the remote user with at least 2-factor access control (eg, password and token or biometrics)
* protect the laptop with a personal firewall
* use anti-virus software
* encrypt all data stored on the mobile device
* install a VPN for communication with the corporate network
* impose a strict security policy and make non-compliance a disciplinary offence
* instigate a continuous staff training programme teaching good security practice.
A macro virus is similar to a standard virus in all but its delivery. Rather than being code written in a programming language and attached to an executable, it is code written in a macro language and attached to a document.
A macro virus can thus be associated with any application that has its own macro language. Needless to say, the more powerful the macro language, the greater the potential for dangerous macro viruses.
The majority of macro viruses are MS Word viruses. This is because
* Word’s macro language is very powerful, versatile and easy to use
* Word is multi-platform and extensively used
This creates a large and easily exploited ‹target› for the virus writers. One of the problems is that the majority of Internet users are either too trusting or too lazy to take some very simple precautions.
There are two things you should remember about Word viruses:
* the virus is located in the template attached to the document, not the document itself.
* receiving an e-mail with an infected document attached does not in itself infect you. You can only become infected if you open the document in Word itself.
This gives you several defensive options. Firstly, you could adopt a security policy of not accepting any e-mail attachments. If you receive one, you simply delete the document without opening it.
A second option would be to open attached documents only with Microsoft’s free Word viewer. This allows you to look at the contents of a Word document without activating any associated macros. In this way you can read the contents safely while deciding what to do with the document itself.
However, whatever else you do, it is important to reinforce your security policy with the use of a mainstream anti-virus product.
Magic Lantern is the name given to a ‹virus› supposedly developed by the FBI. The intention would be to plant a Trojan Horse on the target’s PC that would then log encryption keys and report back to the FBI.
Although the FBI apparently confirmed the existence of Magic Lantern as “a workbench project”, it has been met with much scepticism within the industry. Non-US anti-virus vendors have stated that they would not treat Magic Lantern as any different to any other virus — and would consequently detect and remove it. We may assume, however, that US-based AV suppliers would come under severe pressure from the FBI to co-operate.
Many others suggest that, if such software or project to develop such software actually exists, it is more likely to be a remote administration tool in the vein of Back Orifice than a virus. However, BadTrans is a virus that logged data and sent it to a remote website – and we know that the FBI requested access to that data. There is no doubt that LEAs have the technology to trawl through vast amounts of data looking for a lucky find (see Echelon).
‹Mail bomb› is generally used as a verb rather than a noun. It is the act of, or even incitement to, send massive amounts of probably meaningless text to a particular e-mail address. The purpose is to annoy the recipient, or even crash his/her system – and it is usually done in retaliation for some real or perceived offense.
Mail bombing is not considered to be a reasonable action since it is impossible to guarantee that any inconvenience is limited solely to the target.
A mailing list is an automated e-mail distribution mechanism for a defined subject (the list topic) to a registered readership (a list of e-mail addresses). Often just called ‹lists›, there are innumerable mailing lists on the Internet catering for every subject under the sun.
Lists are controlled by the list ‹owner› – usually the person or organisation that set it up, and probably the owner of the server on which it is run. People registered on the list are known as list ‹members›.
Basically, if a list member has something interesting to say or ask, he or she sends an e-mail message to the list. The message is then copied to every other member of the list – each of whom may respond or not in the same way. Mailing lists thus become important methods of circulating information across a geographically dispersed but topically constrained readership.
There are several ‹Lists› that all security professionals should join. Rather than specify them here, we strongly recommend everyone to search the Internet looking for ‹security mail list›, and to join the more pertinent lists.
Lists may be moderated (that is, the list owner will specifically allow or disallow each message), or unmoderated (that is, no censorship is used, and all messages are allowed automatically). A list member who never posts messages to the list but just ‹listens in› to learn about the subject matter is called a ‹lurker› (which is not generally considered to be a disparaging term).
Malware is the generic term for software that is designed to do harm – a contraction of ‹malicious software›. It is not yet in universal usage, but its popularity as a general term for viruses, Trojan Horses, worms, and malicious mobile code is growing.
A man-in-the-middle attack is possible because of the nature of the Internet and the popularity of public key crypto. It involves intercepting the messages and masquerading as the parties concerned.
Theoretically, the man-in-the-middle (MITM) could block a message from Alice to Bob. He could then masquerade as Bob to Alice, and as Alice to Bob. Without proper authentication (see digital certificates) this could be possible. MITM could receive and read messages from Alice before sending them on to Bob (and vice versa).
If MITM never alters the messages, Alice and Bob may never know that their messages are being intercepted.
Managed Security Services – MSS
MSS is effectively ‹outsourcing› your security management. The purpose is to allow businesses to concentrate on their business, while security professionals concentrate on the security.
It is a strong argument. Information security is such a complex and wide-ranging subject that it is unlikely that anybody can become a security expert without being full-time and highly qualified. Such staff are expensive.
Unless a company is large enough to warrant, and able to afford, several in-house security experts, it is likely that some degree of managed services would prove cost-effective – except, perhaps, for the smallest companies.
Mandatory Access Control (MAC)
A mechanism that enforces the corporate policy or security rules that deal with the sharing of data. This is done by comparing the sensitivity of the resource (eg, file or storage device) with the clearance of the entity (eg, user or application). Such rules could include “only members of the accounts department may read or change payroll data”, and “classified data may only be accessed by staff with a ‹classified› clearance level”.
The TCSEC B1 level and above requires that computer systems be able to enforce MAC as well as DAC.
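The label comparison at the heart of MAC can be sketched as follows. This is an illustration only: the level names and ordering are invented for the example, not taken from any particular system.

```python
# Hypothetical ordered sensitivity levels, lowest to highest.
LEVELS = {"unclassified": 0, "confidential": 1, "classified": 2}

def mac_allows_read(subject_clearance: str, object_sensitivity: str) -> bool:
    """Under MAC the system, not the data's owner, compares labels:
    a subject may read an object only if its clearance dominates
    (is at least as high as) the object's sensitivity."""
    return LEVELS[subject_clearance] >= LEVELS[object_sensitivity]

print(mac_allows_read("classified", "confidential"))   # True
print(mac_allows_read("unclassified", "classified"))   # False
```

The essential point, in contrast to DAC, is that the rule is enforced centrally and cannot be overridden by the owner of the data.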
Designed by Ronald Rivest and released in 1990, MD4 is a cryptographic hash algorithm that calculates a 128-bit number.
Weaknesses have been discovered in MD4, and it should no longer be used.
Research has indicated that MD5 cannot be guaranteed to generate collision-free outputs. Furthermore, current thinking suggests that a hash function should produce an output of at least 160 bits in order to provide resistance to a birthday attack.
MD5 is no longer recommended.
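The figures behind this recommendation can be checked directly. A short sketch using Python's standard hashlib module: a birthday attack finds a collision in roughly the square root of the output space, so a 128-bit digest offers only about 2^64 collision resistance.

```python
import hashlib

# Digest sizes in bits for some common hash functions.
for name in ("md5", "sha1", "sha256"):
    bits = hashlib.new(name).digest_size * 8
    # A birthday attack needs roughly 2**(bits/2) operations to find
    # a collision, so MD5's 128-bit output gives only ~2**64 security.
    print(f"{name}: {bits}-bit digest, ~2**{bits // 2} birthday bound")
```

SHA-1's 160-bit output meets the threshold mentioned above, although it too has since been shown to be weaker than its nominal strength.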
Media Access Control
In general, the MAC address is the hardware address of a device connected to a shared medium. More specifically, it usually refers to the unique hardware number on every Ethernet card. The MAC address is used by the Media Access Control sublayer of the Data-Link Control (DLC) layer of telecommunication protocols.
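A MAC address is six octets; the first three form the organizationally unique identifier (OUI) that identifies the card's manufacturer. A small sketch (the example address is arbitrary):

```python
def parse_mac(mac: str) -> bytes:
    """Parse a MAC address like '00:A0:C9:14:C8:29' into its six octets."""
    octets = bytes(int(part, 16) for part in mac.replace("-", ":").split(":"))
    if len(octets) != 6:
        raise ValueError("a MAC address has exactly six octets")
    return octets

def oui(mac: str) -> str:
    """The first three octets are the OUI, identifying the manufacturer."""
    return "-".join(f"{b:02X}" for b in parse_mac(mac)[:3])

print(oui("00:A0:C9:14:C8:29"))  # prints the vendor prefix: 00-A0-C9
```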
Melissa is the name of a very successful Word macro virus that struck in March 1999. The name “Melissa” comes from the class module that contains the virus. The name is also used in the registry flag set by the virus. It was originally posted to the alt.sex newsgroup, from where it spread far and fast.
The original Melissa virus did not contain a destructive payload, although text could be added to the active document when it was triggered (it did, of course, cause substantial cost through what effectively became a denial of service attack on the Internet). But it demonstrated a new method of virus distribution (in fact, the ShareFun virus, released two years earlier in February 1997, had already demonstrated the principles embodied in Melissa), and subsequent Melissa variants were very destructive. The W97M.Melissa.U variant, for example, deletes the following files when the infected file is opened, making the system unbootable:
CIAC subsequently commented (K-031: Mobile Malicious Code): “In addition to the standard file infection mechanism, the Melissa virus opens a connection to the Outlook 98 e-mail program and begins sending e-mail messages in your name to people in your address books. Attached to each e-mail message is an infected Word document with the note: “Here is that document you asked for … don’t show anyone else ;-)”. Here it is using social engineering by masquerading as you to get individuals in your address book to execute the infected document.”
Melissa was, in fact, the first seriously effective macro virus to combine social engineering with the infected system’s address book in order to propagate itself. In April 1999 the author of this glossary wrote:
“I say Melissa has been a Good Thing. Certainly this won’t go down well with all the sys admins who have had to clear up the mess, nor all the Finance Directors who have to pay for it. And I have every sympathy for them.
But let’s not forget that Melissa contains no data damaging payload. The damage has come from its distribution methodology. The damage has been caused by mail-bombing rather than destruction – denial of service through overloaded servers rather than loss of data. And it could easily have been different.
There’s no doubt that Melissa is a clever little thing. But there is equally no doubt that a serious Black Hat hacker or subversive group could have made it much worse. It was its very success that gave it away. By mailing itself to the first 50 addresses in the Address Book it became a cyber chain letter spreading at cyber speeds. One cycle would produce 50 spawnings, two cycles would generate 2,500 spawnings, three would lead to 125,000, while just five cycles could potentially infect 312,500,000 computers – and more wherever the Address Book contains lists rather than individual addresses. Think of the damage that could have been done with a malicious payload. Think of the havoc that could have been wrought on individual companies if a malicious payload had been coupled with a mail address targeting method.
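The chain-letter arithmetic quoted above is simple geometric growth and can be verified in a couple of lines:

```python
# Each infected document mails itself to the first 50 entries in the
# victim's Address Book, so n cycles produce 50**n new copies.
RECIPIENTS = 50
for cycle in range(1, 6):
    print(f"cycle {cycle}: {RECIPIENTS ** cycle:,} spawnings")
# cycle 5: 312,500,000 spawnings
```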
Well, if the Melissa perpetrator hadn’t done it now and alerted us with a relatively benign virus, serious damage would just be waiting to happen. Now, however, we have been alerted to the potential, and provided we learn from this and recognise mistakes, then we are in a stronger position to defend ourselves in the future.”
This author was wrong. The subsequent history of similar damaging viruses proved that the world’s user population learnt very little from Melissa. While it was estimated that Melissa cost industry $80 million in the USA, a year later the Love Bug virus caused an estimated $10 billion in damage.
Message Authentication Code
Message Authentication Code, capitalized, is the ANSI standard DES-based checksum also known as the U.S. Government standard Data Authentication Code. It is a keyed hash function equivalent to DES cipher block chaining with IV = 0.
Use of the term ‹message authentication code› uncapitalized to refer to any other checksum is frowned upon within RFC2828.
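The structure described above – cipher block chaining with IV = 0, keeping only the final block as the checksum – can be sketched as follows. This is an illustration of the chaining structure only: a toy XOR function stands in for DES and is not in any way secure.

```python
def toy_block_cipher(key: bytes, block: bytes) -> bytes:
    # Placeholder for DES: a real MAC would encrypt each block with DES.
    # XOR with the key is NOT secure; it only shows the chaining structure.
    return bytes(b ^ k for b, k in zip(block, key))

def cbc_mac(key: bytes, message: bytes, block_size: int = 8) -> bytes:
    """CBC-MAC with IV = 0: each plaintext block is XORed into the previous
    ciphertext block and then encrypted; the final block is the MAC."""
    # Pad the message with zero bytes to a whole number of blocks.
    if len(message) % block_size:
        message += b"\x00" * (block_size - len(message) % block_size)
    state = bytes(block_size)  # IV = 0
    for i in range(0, len(message), block_size):
        block = message[i:i + block_size]
        state = toy_block_cipher(key, bytes(s ^ b for s, b in zip(state, block)))
    return state

tag = cbc_mac(b"8bytekey", b"attack at dawn")
```

Because the key enters every round, only the holder of the shared key can recompute the tag and so verify that the message has not been altered.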
A ‹message digest› is a commonly used phrase to describe the ‹hash result› obtained as part of the process of providing secure messaging. Note however, that RFC2828 deprecates this usage “because it unnecessarily duplicates the meaning of the other, more general term [‹hash result›] and mixes concepts in a potentially misleading way.”
Mirror image backup
A mirror image backup (also known as a bit-stream backup) is an exact copy of the entire disc. It is usually used for forensic purposes and includes all the hidden ambient data as well as the visible applications and archive information.
Forensics is not as simple as it may seem, so taking a mirror image backup is best left to specialists in order to avoid compromising the very evidence you seek to obtain.
Mitnick, Kevin David
Kevin Mitnick, a US citizen born in 1963, is probably the best known, and possibly least understood, computer hacker. On balance he is probably best described as a cracker – but it is unlikely that he was ever as black as US law enforcement has painted him. The problem is that his notoriety, hunt, capture and prosecution all bring together a number of different motives: a security consultant whose fees can be proportional to his fame; a journalist who obtained a lucrative book contract; and a government with its own agenda for maximizing the danger in an uncontrolled Internet – and with a clear desire to make an example of Mitnick.
It all started at the beginning of the ’90s. Mitnick was a little known hacker – and might well have remained so had not Katie Hafner and John Markoff written the book Cyberpunk in which he was described as ‹a darkside hacker›. Mitnick himself described the book as inaccurate: “I am sad to report that part one of the book Cyberpunk, specifically the chapters on ‹Kevin: The Dark Side Hacker›, is 20 percent fabricated and libelous.” Katie Hafner is reported to have later admitted “It might have been a mistake to call him a darkside hacker,” but watch how the name Markoff continues to crop up through the next decade. Markoff effectively made a career out of the Mitnick saga…
Mitnick was first arrested at age 17. There were rumors (probably false) that he had cracked NORAD and inspired the WarGames film. He was also convicted by Federal authorities in 1989 in Los Angeles for stealing software and breaking into corporate networks. He received a one-year sentence in that case. He subsequently violated his probation terms and effectively went ‹on the run› in 1992.
This is still not world shattering news. But in 1994 Markoff wrote a New York Times frontpage story headlined “Cyberspace’s Most Wanted: Hacker Eludes F.B.I. Pursuit”. At the very least it seems strange that a journalist with a book about a particular hacker should write a frontpage story about that hacker when more destructive hackers were ignored. Charles Platt subsequently commented: “So far as I can discover, the FBI didn’t classify Mitnick as one of America’s most wanted; it was John Markoff who chose to apply that label. Markoff went far beyond the traditional function of a journalist who merely reports news; he helped to create a character, and the character himself became the news.”
What we don’t know is whether Mitnick would have remained in obscurity were it not for Markoff. Now, however, he was a cause célèbre – lauded by the hacker community (within which he had long been famed anyway), and an embarrassment to law enforcement. Things came to a head when Mitnick hacked into the personal computers of security consultant Tsutomu Shimomura on Christmas Day, 1994.
The ‹hack› was a sophisticated one – using a technique known but not widely used nor easy to achieve: IP Spoofing. Mitnick ‹stole› some files and lodged them on the Internet where they would undoubtedly be found, under the label ‹japboy›.
While the hack of a security consultant is the stuff of urban folklore, the subsequent degeneration into pettiness, on both sides, is sad. Shimomura declared that Mitnick was “really more of a con man or a grifter than a hacker in the true sense of the word” and that he “didn’t seem to be as brilliant a hacker as his legend claimed…”
The truth, however, is that however likable or dislikeable either one is, they are both very clever men. And Shimomura ‹won›. Within a couple of months he had helped the FBI locate Mitnick through his use of the telephone system to access the Internet.
Mitnick was arrested by the FBI in Raleigh, North Carolina, on February 15th, 1995. He was charged with 23 counts of access device fraud. In order to expedite his return to California, he agreed to plead guilty to one count and have his case consolidated in Los Angeles. In California, he was charged with an additional 25 counts of access device, wire, and computer fraud. On March 16, 1999, Mitnick pleaded guilty to five of these counts and two additional counts from the Northern District of California. He was sentenced to 46 months and three years probation. He was released on January 21, 2000, being eligible for early release after serving almost 60 months of his total 68 month sentence.
Now he has to be very careful about what he says and what he does. In an article for Time (Feb 21, 2000) he writes “If I could talk with the people carrying out these disruptions, I’d…”, and “If the terms of my release permitted me to do so, I’d…” The article ends: “The opinions expressed here are for informational purposes only and should not be construed as technical advice of any kind.”
But is he now a lackey of the State? “With history as our guide, we can expect that the government will use this event to push through legislation authorizing digital wiretapping without court orders, to outlaw encryption that the government cannot crack and to track the location of cell-phone users without their knowledge. They’ll push laws that eliminate individual rights in exchange for more government “protection” against cybercrime.”
Mitnick is the stuff of legend.
Moving Picture Experts Group Audio Layer 3 File
MP3 files contain compressed audio tracks. As with JPEG (see JPEG), the amount of compression can be increased by reducing the quality of the sound track when it is replayed. Because very high compression can be achieved without excessive loss in quality, MP3 audio is very popular on the Internet.
As with JPEG, MP3 files are not programs, and cannot be infected with a virus. A hoax exists about an MP3 ‹virus›, in similar vein to the hoax about JPEG.
from Sophos’ V-Files
Multipurpose Internet Mail Extension (MIME)
An electronic mail protocol that allows users to attach binary files to e-mail messages. Most mail packages support the MIME protocol.
It is defined for e-mail use in RFC1521 and 1522, and has been extended by other RFCs for use in applications.
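A sketch of attaching a binary file under MIME, using Python's standard email package (the addresses, payload and filename are invented for illustration):

```python
from email.message import EmailMessage

# Build a MIME message with a binary attachment.
msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "Report attached"
msg.set_content("The report is attached.")
msg.add_attachment(b"\x00\x01\x02binary payload", maintype="application",
                   subtype="octet-stream", filename="report.bin")

# The binary payload travels base64-encoded inside a multipart/mixed body.
print(msg.get_content_type())  # multipart/mixed
```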
Murphy’s Law/Sod’s Law
Anything that can go wrong, will go wrong.
Supposedly a law of nature, a truth spoken in jest, it is a maxim that all systems designers, and especially security designers, should constantly bear in mind.
National Computer Security Center (NCSC)
In the USA, a government agency that is part of the National Security Agency (NSA) and produces technical reference materials relating to a wide variety of computer security areas.
National Information Infrastructure (NII)
In the US, a concept conceived by the Clinton Administration together with an alliance of computer, software, cable, and phone companies. The concept provides for the electronic network of tomorrow, using phone lines, cable systems, and high-speed data networks to link everyone from government agencies, universities, company presidents, and private citizens. Vast amounts of services, entertainment, and information would be available through computers, televisions, and telephones.
National Criminal Intelligence Service (NCIS)
In the UK, the National Criminal Intelligence Service is the nearest equivalent the UK has to the FBI.
Just how equivalent became apparent with the “NCIS SUBMISSION ON COMMUNICATIONS DATA RETENTION LAW” dated 21st August 2000. This document requested that copies of all communications should be retained and then archived for a total of seven years:
4. WHAT TYPE OF DATA SHOULD BE RETAINED?
4.1 All communications data generated in the course of a CSP’s business or routed through their network or servers, involving both Internet and telephone services, within a widely interpreted definition of “communications data” as proposed in the draft provisions of Clause 20, Part 1, Chapter II, Regulation of Investigatory Powers Act. [5.1.1]
4.2 Legislation should require every CSP to retain all communications data originating or terminating in the UK, or routed through the UK networks, including any such data that is stored offshore. [5.1.1]
But even more scary is the route through which such measures pass into British law: via a European Directive – that is, without any UK accountability and no parliamentary vote. In June 2002 a European Directive ‹allowed› EU nations to require that ISPs collect and keep detailed logs of each customer’s traffic, so that LEAs can access it later. The data to be gathered would include the headers (from, to, cc and subject lines) of every e-mail every customer sends or receives, and every user’s complete Web browsing history.
NCIS epitomizes the manner in which LEAs are progressing from criminal investigation (investigation of crimes that have been committed), to criminal intelligence (investigation of the people considered likely to commit the crimes). This can only be achieved by free access to all communications between everybody; which is the Holy Grail of LEAs.
See also: Data Retention
During 2004 the Home Office set about establishing a new agency, The Serious Organised Crime Agency. The relevant strategic plan states that by 2006 The National Crime Squad, NCIS and the Immigration and Nationality Directorate enforcement and intelligence services will have been «incorporated wholly or partially into SOCA».
Need to know
A principle by which information is only provided to those with a legitimate need for that information – similar in concept and effect to the principle of least privilege.
Network Address Translation (NAT)
Network address translation is a procedure by which the firewall can alter the data in packets to change the network address. This can serve several purposes:
* it allows a large number of internal hosts to connect to the Internet using a small number of IP addresses (thus conserving a limited resource)
* it can help enforce the firewall’s control over outbound traffic since the internal host will need to obtain a valid address from the firewall before its connection to the Internet will work
* it prevents the true IP addresses from reaching the Internet; thus providing a small degree of additional security
* dynamic NAT can provide additional defence against incoming attacks since the attacker has to work within the time constraints of the current session.
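The many-to-one form of NAT (often called port address translation) can be sketched as a simple translation table. The addresses and port range below are invented for the example:

```python
import itertools

class SimpleNat:
    """Minimal sketch of many-to-one NAT: internal (addr, port) pairs are
    rewritten to one public address with a fresh source port per session."""
    def __init__(self, public_addr: str):
        self.public_addr = public_addr
        self._ports = itertools.count(40000)  # arbitrary ephemeral range
        self.table = {}   # (internal_addr, internal_port) -> public port

    def outbound(self, internal_addr: str, internal_port: int):
        key = (internal_addr, internal_port)
        if key not in self.table:
            self.table[key] = next(self._ports)
        # The packet leaves carrying only the public address and port;
        # the internal address never reaches the Internet.
        return (self.public_addr, self.table[key])

nat = SimpleNat("203.0.113.1")
print(nat.outbound("192.168.0.10", 5000))  # ('203.0.113.1', 40000)
print(nat.outbound("192.168.0.11", 5000))  # ('203.0.113.1', 40001)
```

Replies are matched against the same table in reverse, which is why an incoming attacker must hit a mapping that currently exists.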
NFS and NIS attacks
Network File System (NFS) is a protocol designed by Sun Microsystems that allows a computer on a network to use the files and peripherals of another networked computer as if they were local.
NFS allows systems to share files over a network by letting a client mount a disk on a remote server machine. The Network Information Service (NIS) maintains a distributed database of password tables, group files, host tables, and other information that systems on a network can share. These services should always be protected by a firewall.
Without protection, an attacker may be able to NFS-mount your filesystems. Client machines are allowed to read and change files stored on the server without logging into the server or entering a password – and since NFS doesn’t log transactions, you may not even know about it.
An attacker who can guess the name of your NIS domain and can send a NIS request to your NIS server can get a copy of your password information (including encrypted passwords), even if you are running shadow passwords and the passwords are not in the /etc/passwd file. The attacker is then free to crack your passwords at leisure.
In the USA, the National Institute of Standards and Technology, an agency of the Department of Commerce.
NIST was created in 1901 as the National Bureau of Standards and renamed in 1988 as the National Institute of Standards and Technology. It is the United States government agency that provides assistance in developing standards. It has primary Government responsibility for IT security standards for sensitive but unclassified information.
Non-repudiation is an attribute of communications that seeks to prevent future false denial of involvement by either party. Non-repudiation with proof of origin provides the recipient of data with evidence that proves the origin of the data. Non-repudiation with proof of receipt provides the originator of data with evidence that proves the data was received as addressed. Non-repudiation is consequently an essential element of trust in e-business.
It is claimed that PKI can provide non-repudiation – but this is true only to a point. In scientific terms, PKI (that is, digital certificates and digital signatures) can prove which device originated a message – it cannot prove who was operating that device. So, strictly speaking, PKI can provide evidence of likelihood rather than absolute proof. To what extent this becomes accepted will ultimately be proven by legal case law.
One point does seem likely, however. If PKI is accepted to provide legally binding non-repudiation, it will mean that digital signatures are irrevocable. If I manage to ‹steal› your digital signature, you will be absolutely bound by any contract I enter with your signature. In short, where liability for, say, credit card fraud often now lies with the banks, in the future electronic world it will lie with YOU.
When considering the safety of digital certificates consider that in early 2001 Verisign ‹gave› two Microsoft certificates to unknown parties who simply claimed that they were Microsoft employees. In theory this means that with these certificates the unknown parties can offer software that does who knows what (that is, untrustworthy software) as if it were from Microsoft (that is, from a trustworthy source).
Word Global Template
For most users, the file NORMAL.DOT is the global template for Microsoft Word. Anything that you store in the global template effectively becomes a built-in part of your installation of Word. This is not the only way of adding features to Word, but it is the best-known and the most common.
NORMAL allows you to store all kinds of customisations, including keystroke shortcuts, text styles, toolbars and macros. Since Word macros are basically programs (see DOC), adding macros to NORMAL allows you to change the programmatic behaviour of Word altogether. In many ways, NORMAL is to Word as AUTOEXEC.BAT is to Windows. It contains macros that are always loaded when Word is started.
This means that a virus which infects the global template is effectively memory resident inside Word, and will automatically be reloaded every time Word is started. Worse still, a virus does not have to write itself explicitly into the NORMAL file. All it needs to do is to go resident, thereby altering the global template. When Word exits, it will detect these changes, and automatically save them back into NORMAL. Although you can ask Word to prompt you about such changes (which could give warning of a virus infection), it takes only one line of program code to switch this warning off. Most viruses do this.
Some people still believe that write-protecting NORMAL prevents virus infection. This is false — it merely stops Word automatically updating (and infecting) NORMAL when you exit. As long as Word is loaded, any resident virus will remain resident and active. Write-protecting NORMAL may slow down the spread of a virus, because the virus is not automatically reloaded every time Word starts. But as a protective measure it is of very limited use.
from Sophos’ V-Files
National Security Agency (USA)
A secretive US agency with wide-ranging responsibility for US security, particularly concerned with Communications Intelligence (COMINT). For many years its very existence was denied – and it was colloquially termed NSA – No Such Agency.
It was formed, however, as long ago as 1952 by President Truman. The following, which demonstrates the range of NSA’s responsibilities and authority, is taken from Truman’s original memorandum, declassified in 1998:
“… 2. A directive to the Secretary of Defense. This directive shall include the following provisions:
* Subject to the specific provisions of this directive, the Secretary of Defense may delegate in whole or in part authority over the Director of NSA within his department as he sees fit.
* The COMINT mission of the National Security Agency (NSA) shall be to provide an effective, unified organization and control of the communications intelligence activities of the United States conducted against foreign governments, to provide for integrated operational policies and procedures pertaining thereto. As used in this directive, the terms «communications intelligence» or «COMINT» shall be construed to mean all procedures and methods used in the interception of communications other than foreign press and propaganda broadcasts and the obtaining of information from such communications by other than the intended recipients,* but shall exclude censorship and the production and dissemination of finished intelligence.
* NSA shall be administered by a Director, designated by the Secretary of Defense after consultation with the Joint Chiefs of Staff, who shall serve for a minimum term of 4 years and who shall be eligible for reappointment. The Director shall be a career commissioned officer of the armed services on active or reactivated status, and shall enjoy at least 3-star rank during the period of his incumbency.
* Under the Secretary of Defense, and in accordance with approved policies of USCIB, the Director of NSA shall be responsible for accomplishing the mission of NSA. For this purpose all COMINT collection and production resources of the United States are placed under his operational and technical control. When action by the Chiefs of the operating agencies of the Services and civilian departments or agencies is required, the Director shall normally issue instructions pertaining to COMINT operations through them. However, due to the unique technical character of COMINT operations, the Director is authorized to issue direct to any operating elements under his operational control task assignments any pertinent instructions which are within the capacity of such elements to accomplish. He shall also have direct access to, and direct communication with, any elements of the Service or civilian COMINT agencies or any other matters of operational and technical control as may be necessary, and he is authorized to obtain such information and intelligence material from them as he may require. All instructions issued by the Director under the authority provided in this paragraph shall be mandatory, subject only to appeal to the Secretary of Defense by the Chief of Service or head of civilian department or agency concerned.
* Specific responsibilities of the Director of NSA include the following:
* Formulating necessary operational plans and policies for the conduct of the U.S. COMINT activities.
* Conducting COMINT activities, including research and development, as required to meet the needs of the departments and agencies which are authorized to receive the products of COMINT.
* Determining, and submitting to appropriate authorities, requirements for logistic support for the conduct of COMINT activities, together with specific recommendations as to what each of the responsible departments and agencies of the Government should supply.
* Within NSA’s field of authorized operations prescribing requisite security regulations covering operating practices, including the transmission, handling and distribution of COMINT material within and among the COMINT elements under his operational or technical control; and exercising the necessary monitoring and supervisory control, including inspections if necessary, to ensure compliance with the regulations.
* Subject to the authorities granted the Director of Central Intelligence under NSCID No. 5, conducting all liaison on COMINT matters with foreign governmental communications intelligence agencies.
* To the extent he deems feasible and in consonance with the aims of maximum over-all efficiency, economy, and effectiveness, the Director shall centralize or consolidate the performance of COMINT functions for which he is responsible. It is recognized that in certain circumstances elements of the Armed Forces and other agencies being served will require close COMINT support. Where necessary for this close support, direct operational control of specified COMINT facilities and resources will be delegated by the Director, during such periods and for such tasks as are determined by him, to military commanders or to the Chiefs of other agencies supported.
* The Director shall exercise such administrative control over COMINT activities as he deems necessary to the effective performance of his mission. Otherwise, administrative control of personnel and facilities will remain with the departments and agencies providing them.
* The Director shall make provision for participation by representatives of each of the departments and agencies eligible to receive COMINT products in those offices of NSA where priorities of intercept and processing are finally planned.
* The Director shall have a civilian deputy whose primary responsibility shall be to ensure the mobilization and effective employment of the best available human and scientific resources in the field of cryptographic research and development.
* Nothing in this directive shall contravene the responsibilities of the individual departments and agencies for the final evaluation of COMINT information, its synthesis with information from other sources, and the dissemination of finished intelligence to users.
3. The special nature of COMINT activities requires that they be treated in all respects as being outside the framework of other or general intelligence activities. Orders, directives, policies, or recommendations of any authority of the Executive Branch relating to the collection, production, security, handling, dissemination, or utilization of intelligence, and/or classified material, shall not be applicable to COMINT activities, unless specifically so stated and issued by competent departmental or agency authority represented on the Board. Other National Security Council Intelligence Directives to the Director of Central Intelligence shall be construed as non-applicable to COMINT activities, unless the National Security Council has made its directive specifically applicable to COMINT.
NSAKEY
At the end of August ’99, Andrew Fernandes, a Canadian cryptographer with a small consultancy called Cryptonym Corporation, was debugging one of his own programs following the release of NT4 Service Pack 5.
Now it has long been known that Windows has two crypto keys. One obviously belongs to Microsoft itself, and is there to ensure Windows can load CryptoAPI services in conformance with US export laws. But what of the second?
Programmers know that Windows performs the calculation
component_validity = crypto_verify([cryptographic key], crypto_component)
but they have not known exactly what the [cryptographic key] has meant. Well, Fernandes found, by accident, that the MS programmer or programmers had forgotten to remove the symbolic label identifying the second key (such labels are standard practice for debugging purposes). And the name of this second key? NSAKEY.
On this simple fact has grown a mountain of speculation. You see, if this key is actually owned by the NSA, it would allow the NSA to subvert your security. The NSA could implant a Trojan to replace the module which performs the encryption on your box with one that doesn’t perform encryption, and a sniffer. This would be accepted by the NSA key – with the result that your network traffic would be in plaintext and sniffed by the trojaned NSA sniffer; and you wouldn’t even know it.
Microsoft, however, has strongly denied any such thing. «This report is inaccurate and unfounded. The key in question is a Microsoft key. It is maintained and safeguarded by Microsoft, and we have not shared this key with the NSA or any other party.» Microsoft said the key is labelled «NSA key» because NSA is the technical review authority for U.S. export controls, and the key ensures compliance with U.S. export laws.
OBJ Files
These are similar to library files (see Library), though they cannot be loaded dynamically when a program is run. Instead, they are created when a programmer is building a piece of software, as an intermediate step between the source code (written in a language such as C or C++) and the finished program.
Because most programs are constructed out of several source code files, the programmer usually runs a “compiler”, which converts each source file into an OBJ file, and then runs a “linker”, which gathers all the OBJ files together and builds them into a single program.
OBJ files are usually recreated many times whilst a program is under development, so they change frequently. This means a virus which could infect OBJs would be manipulating files that the programmer would expect to be changing, thus avoiding notice. And it would end up being linked into any program which used that particular infected OBJ. Like libraries, OBJs may be shared between multiple programs, so an OBJ virus might find its way into several programs having infected only one file on your disk.
OBJ viruses exist, but are uncommon. They find it hard to spread because most users are not programmers, and do not have OBJ files on their computer. This means that OBJ viruses do not have a ready supply of victims.
from Sophos’ V-Files
The reassignment to some subject of a medium (e.g., page frame, disk sector, or magnetic tape) that contained one or more objects. To be securely reassigned, no residual data from previously contained object(s) can be available to the new subject through standard system mechanisms.
Objects are passive entities between which information flows under the direction of active entities called subjects. Objects could thus include directories/folders, files, fields, screens, keyboards, memory, magnetic storage, printers and so on.
An object is effectively, therefore, data or a data container. It is effectively a system resource. Access to an object implies access to the data contained by the object.
A series of random numbers, which used to be written down (hence the name), but which is now more commonly supplied on floppy disk or CD-ROM.
The one-time pad is used as a method of private key encryption, with both parties in a transaction (and no-one else) having copies of the same sequence of random numbers for use as the key. These are destroyed after use and never re-used.
Each character or digit in the transaction is encrypted by the addition modulo 26 of the plaintext character and the one-time pad key. This is the only encryption methodology that cannot be broken regardless of processing capacity and time available. But it has its problems:
* truly random numbers are difficult to generate
* since the key length is always equal to the plaintext length, vast quantities of the random number sequence must be generated, in turn leading to key management, distribution and synchronization problems
* one-time pads cannot provide any form of authenticity.
The one-time pad is therefore only realistic for high security, low traffic conditions.
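The addition-modulo-26 mechanism can be sketched as follows (assuming the usual a=0 … z=25 convention, with the pad drawn from a cryptographically strong random source):

```python
import secrets

ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def make_pad(length):
    # The pad must be truly random and exactly as long as the message.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def encrypt(plaintext, pad):
    # Each ciphertext letter is plaintext + pad, modulo 26.
    return "".join(
        ALPHABET[(ALPHABET.index(p) + ALPHABET.index(k)) % 26]
        for p, k in zip(plaintext, pad)
    )

def decrypt(ciphertext, pad):
    # Decryption is the inverse: ciphertext - pad, modulo 26.
    return "".join(
        ALPHABET[(ALPHABET.index(c) - ALPHABET.index(k)) % 26]
        for c, k in zip(ciphertext, pad)
    )

pad = make_pad(len("attackatdawn"))
ciphertext = encrypt("attackatdawn", pad)
recovered = decrypt(ciphertext, pad)
```

After the exchange the pad must be destroyed: re-using any part of it breaks the unconditional security that makes the one-time pad unique.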
An open relay is a mail server that does not limit its services to a closed list of registered users – it is open for use, or abuse, by anyone. An open relay doesn’t check the source IP of the sender, and does not append the IP to the ‹Received:› field of the mail header.
When you send an e-mail, your mail server (such as your ISP) forwards (or relays) your mail to the server of the intended recipient. Usually, it will only do so if it recognizes your e-mail address; that is, if you are already a registered user of the server/relay.
However, there are many mail servers connected to the Internet that perform no such checks. Some are there by design, some are there simply because their owners don’t realize that they are open.
If the world were a better place than it is, this would be no bad thing – it would allow anonymity when anonymity is sought. Unfortunately, this anonymity is sought today by spammers who use open relays for mass junk mailings (spam) with little chance of being discovered and stopped.
If you receive spam via an open relay there is virtually no chance of locating or dealing with the spammer without the help of the administrator of that open relay – and there is often little hope of this. For this reason there are several published databases of known open relay mail servers, and many ISPs set their servers to block, reject or at least highlight mail originating from an open relay.
Open Source Software – OSS
Open Source Software is software that is supplied with full source code. But it is much more. The key is in the Distribution License – this is what defines whether or not software is open source. There are many licenses that claim to be ‹open source› – but it is generally held that conformance to the nine guiding principles defined by the Open Source Initiative (OSI) is the test. These, in brief, are:
* Free Distribution: anyone can distribute or redistribute the software.
* Inclusive Of Source Code: the program must include source code, and must allow distribution in source code as well as compiled form.
* Derived Works: the license must allow modifications and derived works, and must allow them to be distributed under the same terms as the license of the original software.
* Integrity Of Original Source Code: the integrity of the author’s original code may be maintained by requiring that derived works be distributed as a ‹patch› and carry a different name or version number from the original software.
* No Discrimination Against Persons or Groups: the license must not discriminate against any person or group of persons.
* No Discrimination Against Fields of Endeavor: the license may not restrict the manner in which the software is used.
* Distribution of License: the rights attached to a program automatically apply to everyone to whom the program is redistributed.
* License Must Not Be Specific to a Product: the rights attached to the program must not depend on the program’s being part of a particular software distribution.
* The License Must Not Restrict Other Software: the license must not impose restrictions on other software distributed with it.
The basic principle is that users must be provided with, or have easy access to, the source code of the software; and be allowed to modify that code for their own and other people’s use. Of the various licenses available, the most popular is the Free Software Foundation’s (FSF) GNU General Public License (usually just called the GPL), first developed by Richard Stallman.
There are many, many arguments in favor of the use of open source software. A few of the more important ones include:
* OSS is more secure
* OSS is more stable
* OSS is less costly
* OSS is more likely to conform to standards
* There is an immediacy to the use and modification of OSS
* There are few, if any, restrictions on the use of OSS
Nevertheless, some of the proprietary software manufacturers consider OSS to be less secure and a threat to intellectual property. There is a danger that Palladium could prove a threat to OSS in the future.
OSS is discussed in the whitepaper: The Strengths and Weaknesses of Open Source Software – And Its Role in the Security Model.
OpenSSL is an open source software (OSS) version of SSL. It is not strictly a product, but a combination of a cryptographic library and an SSL toolkit. It is derived from an earlier work, SSLeay, originally written by Eric A Young and Tim J Hudson in 1995. Development of SSLeay ceased in December 1998, and the first version of OpenSSL (0.9.1c) was released.
OpenSSL is the only full-featured version of SSL currently available for C and C++, and works across all major platforms. The source can be downloaded from www.openssl.org.
The operating system is a Translator. It translates complex and ambiguous human intentions into the simple logical commands that the computer can understand, and obey.
The operating system is a Traffic Cop. It controls and directs the flow of data traffic between the hardware and software components of your computer.
The operating system is an Area Controller. It is responsible for how everything works within its area (the computer).
The operating system is a Customs Officer. It has total control over what can pass from the human world to the computer world, and back again.
In short, the operating system is the life-blood of the computer. It is the environment within which all of your software applications, from games to accounts, from Internet browsing to word processing, operate. But it’s fickle, weak and not very loyal. Break it, and the computer won’t work. Injure it, and only the uninjured parts will work. Let it be captured, and it (and all of your software that runs within it) will obey the commands of its new owner…
And that’s what hackers try to do. They try to capture the operating system without you knowing about it – so that it will do their bidding whenever they want. They try to injure it with viruses and worms so that it no longer behaves in the way you expect.
If they succeed, your computer has been ‹compromised›; or in the vernacular, ‹hacked›, ‹rooted›, ‹owned› or ‹infected›. Nothing on the computer, unless it is encrypted, is safe: your confidential letters, credit card numbers, bank account details are all available to the hacker. Your very identity can be stolen.
That’s why you must defend the operating system with a firewall, an anti-virus system and an anti-spyware system. But you must also behave in a safe and secure manner – and above all, you must apply the first aid ‹patches› that the vendor will regularly make available. If you don’t look after your operating system, sooner or later you will be hacked, rooted, owned or infected.
Orange Book, The
One of the Rainbow Series, the Orange Book is an alternate name for DoD (US Department of Defense) Trusted Computer Security Evaluation Criteria (US DOD 5200.28-STD). It provides the information needed to classify computer systems as security levels of A, B, C, or D, defining the degree of trust that may be placed in them.
P3P: Platform for Privacy Preferences
The Platform for Privacy Preferences Project (P3P) is an emerging industry standard that enables web sites to express their privacy practices in a standardized format that can be automatically retrieved and interpreted by user agents. The goal is to help users be informed about web site practices by simplifying the process of reading privacy policies. With P3P, users need not read the privacy policies at every site they visit; instead, key information about what data is collected by a web site can be automatically conveyed to a user, and discrepancies between a site’s practices and the user’s preferences can be automatically flagged. The goal of P3P is to increase user trust and confidence in the Web.
Although P3P provides a technical mechanism for helping inform users about privacy policies before they release personal information, it does not provide a mechanism for ensuring sites act according to their policies. Products implementing the P3P specification may provide assistance in that regard, but that is up to specific implementations and beyond the scope of the specification. P3P is intended to be complementary to both legislative and self-regulatory programs that can help enforce web site policies. In addition, while P3P does not include mechanisms for transferring or securing personal data, it can be built into tools designed to facilitate data transfer.
P3P is an activity of the World Wide Web Consortium (W3C). For brevity, we often refer to activities, specifications, and products related to the Platform for Privacy Preferences Project as “P3P”.
from the P3P FAQ at www.w3.org/P3P/p3pfaq.html
In June 1999, Karen Coyle wrote an article called P3P: Pretty Poor Privacy? A Social Analysis of the Platform for Privacy Preferences (P3P). She commented: “Take an object with a familiar shape, add something symbolizing eyes, ears, nose and mouth and declare it to be almost human (Mr Potato Head). P3P is the software equivalent of Mr. Potato Head. It is an engineer’s vision of how humans decide who to trust and what to tell about themselves. It has a set of data elements and neatly interlocking parts, and it has nothing at all to do with how people establish trust.”
The basic building block for communications on the Internet, a packet is a block of data sent over a network. It includes the identities of the sending and receiving stations, error-control information, and a message.
Packet filtering (screening)
The process of controlling the flow of data from one network to another (usually a LAN and the Internet) based on a set of rules (security policy). It is usually performed by a router as part of a firewall. While a normal router decides where to direct the data, a packet filtering router decides whether to forward it at all.
Packet Filtering Firewall
A packet filtering router is a first generation firewall. Today, most commercial firewalls include packet filtering as a part of the overall firewall architecture.
Packet filtering analyzes network traffic at the transport protocol layer, but can only allow or deny passage of the packet. The rules or security policy that control this action are based on:
* the physical network interface that the packet arrives on
* the address the data is coming from (source IP address)
* the address the data is going to (destination IP address)
* the type of transport layer (TCP, UDP, ICMP)
* the transport layer source port
* the transport layer destination port
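Evaluation of such a rule set can be sketched as follows (the rules, addresses and field names here are hypothetical; a real packet filter inspects raw packet headers in the kernel or on the router, not Python objects):

```python
import ipaddress

# Hypothetical policy: first matching rule wins; anything unmatched is denied.
RULES = [
    {"src": "10.0.0.0/8", "proto": "tcp", "dport": 25, "action": "deny"},
    {"src": "0.0.0.0/0",  "proto": "tcp", "dport": 80, "action": "allow"},
]

def filter_packet(src_ip, proto, dport):
    for rule in RULES:
        if (ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule["src"])
                and proto == rule["proto"]
                and dport == rule["dport"]):
            return rule["action"]
    return "deny"  # default deny: forward nothing that is not explicitly allowed
```

The default-deny fall-through at the end is the important design choice: a packet filter that defaults to ‹allow› leaks every protocol its administrator forgot to think about.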
Packet filters are fast because they perform limited evaluations, and they do not require client computers to be specifically configured. However, they are not considered to be very secure on their own because:
* They do not understand application layer protocols, and cannot restrict access to protocol subsets for even basic services (such as the PUT or GET commands in FTP). They are thus less secure than application layer and circuit level firewalls.
* They are stateless: they do not keep information about a session or application-derived information. (Since the ICMP protocol layer does not use port numbers for its communications protocol, it is difficult for packet filters to apply any security policy to this form of network traffic. In this instance, the packet filter would need to maintain state tables to ensure that an ICMP reply message was recently requested from an internal host. This ability to track communications state is one of the primary differences between simple packet filters and dynamic packet filters.)
* They have very limited abilities to manipulate information within a packet.
* They offer no value-added features, such as HTTP object caching, URL filtering, and authentication, since they cannot understand the protocols being used and cannot tell one from another.
* They cannot restrict the information passed from internal computers to services on the firewall server.
* Intruders can potentially access the services on the firewall server.
* They have little or no audit event generation and alerting mechanisms, and it can be difficult to test «allow» and «deny» rules.
Packet Internet Gopher (PING)
A TCP/IP utility that is used to test the ‹reachability› of destinations by sending them an ICMP echo request and waiting for a reply. To ‹ping› a website is to use the utility to see whether that site is currently up; that is, connected to the Internet. When you ping a site, you get a response (a PONG) telling you how long (usually measured in milliseconds) the round trip took.
Palladium
“Palladium is Microsoft’s code name for an evolutionary set of features for the Windows operating system. Combined with a new breed of hardware and applications, these features will give individuals and groups of users greater data security, personal privacy, and system integrity.” John Manferdelli, the manager of the Microsoft unit developing Palladium. Palladium is the Microsoft operating system that will sit on top of Intel’s TCPA architecture.
This is spin. The real argument for TCPA/Palladium is the vendors’ desire to build digital rights management into the heart of every computer. Software piracy will become much harder – and nobody can deny the vendors’ right to try and limit their loss. But at what cost to the rest of the industry and law-abiding users?
One argument suggests that Palladium is a direct attack against the Open Source Software industry (see Palladium and Open Source Software). Ross Anderson has a thought-provoking FAQ on TCPA/Palladium. He concludes:
The question is: security for whom? The average user might prefer not to have to worry about viruses, but TCPA won’t fix that: viruses exploit the way software applications (such as Microsoft Office) use scripting. He might be worried about privacy, but TCPA won’t fix that; almost all privacy violations result from the abuse of authorised access, often obtained by coercing consent. If anything, by entrenching and expanding monopolies, TCPA will increase the incentives to price discriminate and thus to harvest personal data for profiling.
The most charitable view of TCPA is put forward by a Microsoft researcher: there are some applications in which you want to constrain the user’s actions. For example, you want to stop people fiddling with the odometer on a car before they sell it. Similarly, if you want to do DRM on a PC then you need to treat the user as the enemy.
Seen in these terms, TCPA and Palladium do not so much provide security for the user, but for the PC vendor, the software supplier, and the content industry. They do not add value for the user. Rather, they destroy it, by constraining what you can do with your PC – in order to enable application and service vendors to extract more money from you.
No doubt Palladium will be bundled with new features so that the package as a whole appears to add value in the short term, but the long-term economic, social and legal implications require serious thought.
The effect of taking security seriously.
* a property of an object, indicating a lack of logical or computational capability, and therefore an inability to change the information contained
* threats to the confidentiality of data (cf attack) that would not result in any unauthorized change to the state of the data or the systems (for example, monitoring and/or recording of data).
A passphrase serves a similar purpose to a password – except that it is longer and usually constructed from a number of different words. The idea is that it is easier for a human to remember a phrase than it is to remember a string of mixed uppercase and lowercase alphanumeric characters. It is, in theory, easier to remember
“mary had a little lamb”
than it is to remember a random string of mixed characters. The passphrase is then generally hashed to a fixed length, which is used as the password.
Theoretically, a passphrase is also more secure than a password. “If passwords resemble English text, then since each character contains only about 1.5 bits of entropy, a passphrase provides greater security through increased entropy than a short password.” – Handbook of Applied Cryptography, by A. Menezes, P. van Oorschot, and S. Vanstone, CRC Press, 1996.
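The two steps above can be sketched together: estimating entropy at the cited 1.5 bits per character, then hashing the passphrase to a fixed length. SHA-256 is used here purely as an illustration of “hashed to a fixed length”; the entry does not specify which hash a given system uses.

```python
import hashlib

ENTROPY_PER_CHAR = 1.5  # bits per character for English-like text (Menezes et al.)

def entropy_estimate(text):
    # A rough estimate assuming the text resembles ordinary English.
    return ENTROPY_PER_CHAR * len(text)

short_password = "computer"             # ~12 bits as English text
passphrase = "mary had a little lamb"   # ~33 bits

# Hash the passphrase down to a fixed length for use as the actual password/key.
key = hashlib.sha256(passphrase.encode()).hexdigest()
```

Whatever the length of the passphrase, the derived key is always the same fixed size, which is what lets systems accept arbitrarily long phrases.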
The biggest drawback is that the extra length requires extra typing…
A security device consisting of a protected/private string of characters known only to the authorized user/s and the system. It is used to authenticate the authorised user of a computer or data file.
The importance of the password is often forgotten or ignored. Your password is like the lock on the front door. If it’s a strong lock, then you will easily stop all but the most determined and sophisticated intruders. But if it is a weak lock, then it will hardly delay, never mind stop, the intruder.
This begs the question – how do I come up with a good/strong password?
- do not use a password generator – generators use routines, and where there is a routine there is predictability
- do not use any word that could appear in a dictionary (either forwards, backwards, forwards and backwards, or with mixed upper and lower cases)
- use at least 8 characters
- change the password every month
- never write it down
Alec Muffett, the author of the password ‹cracking› program called Crack, has offered the following advice:
For instance, NEVER use passwords like:
alec7: it’s based on the user’s name (& it’s too short anyway)
tteffum: based on the user’s name again
gillian: girlfriend’s name (in a dictionary)
naillig: ditto, backwards
PORSCHE911: it’s in a dictionary
12345678: it’s in a dictionary (& people can watch you type it easily)
Computer: just because it’s capitalised doesn’t make it safe
wombat6: ditto for appending some random character
6wombat: ditto for prepending some random character
merde3: even for french words…
mr.spock: it’s in a sci-fi dictionary
zeolite: it’s in a geological dictionary
ze0lite: corrupted version of a word in a geological dictionary
I hope that these examples emphasise that ANY password derived from ANY dictionary word (or personal information), modified in ANY way, constitutes a potentially guessable password.
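The rules above can be sketched as a simple checker (the word list and length threshold here are illustrative only; a real tool such as Crack uses far larger dictionaries and many more mutation rules):

```python
DIGITS = "0123456789"

def is_weak(password, dictionary):
    # Rule 1: use at least 8 characters.
    if len(password) < 8:
        return True
    # Rule 2: reject any dictionary word, forwards or backwards,
    # regardless of case, even with digits appended or prepended.
    word = password.lower().strip(DIGITS)
    if word in dictionary or word[::-1] in dictionary:
        return True
    return False

words = {"wombat", "porsche", "computer", "gillian"}
```

Note what the checker cannot do: it can only reject the obviously guessable, not prove a password strong, which is why the other rules (change it monthly, never write it down) still matter.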
Password aging is the process of forcing a user to change (or maintain) his or her password after, or for, a specified period of time. In Unix it is effected by the inclusion of password aging data after the user’s password in the password field of the password file, separated from the password itself by a comma (,).
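Splitting such a field apart can be sketched as follows (the entry line and the aging code ‹M4› are invented for illustration; real legacy aging codes encode maximum and minimum ages in a compact radix-64 alphabet):

```python
# Hypothetical legacy-style passwd line with aging data appended to the
# password field after a comma:
entry = "alice:aBcD1234hash,M4:1001:100:Alice:/home/alice:/bin/sh"

fields = entry.split(":")
# The second field holds the password; anything after the comma is aging data.
password, _, aging = fields[1].partition(",")
```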
A password attack is an attempt to obtain or decrypt a legitimate user’s password – the key into the system. Readily available password dictionaries, cracking programs, and password sniffers combine to make passwords very vulnerable.
It is still surprisingly easy to obtain users’ passwords. Very often they can be guessed. Very often they can be found (written on a Post-it note stuck underneath the keyboard, for example). And very often they are insufficiently protected by the operating system itself.
There is no defense against password attacks other than using a strong password policy that includes a minimum length, unrecognizable words, and frequent changes.
The use of a sniffer to capture passwords as they pass across a network. The network could be a local area network, or the Internet itself. And the sniffer could be hardware (if the attacker has physical access to the network) or software (in which case all that is required is the ability to compromise a server). A favourite method for ‹installing› a password sniffer onto a local area network would be through the use of a Trojan Horse.
Once a LAN has been compromised, it is very difficult to detect the sniffer. The LAN is likely to be Ethernet – in which case the attacker ensures that the compromised server is placed into ‹promiscuous› mode (that is, able to receive all the packets on the network rather than just those specifically addressed to it). When the sniffer sees a packet that fits certain criteria, it logs it to a file. The most common criterion for an interesting packet is one that contains words like “login” or “password”.
But the sniffer itself is passive. It doesn’t change anything: it just listens and logs, allowing the attacker to analyse the logs later. Since it doesn’t change anything, it is difficult to detect. But the log itself could grow very large – so the detection of such logs could demonstrate the existence of a sniffer.
During 1998 a password sniffing program was ‹delivered› in a shareware package that many users downloaded from the Internet. The apparently useful program acted as a Trojan Horse that also installed the sniffer. The sniffer then, apparently, ‹sniffed› many thousands of passwords and credit card numbers that it automatically and surreptitiously e-mailed back to the attacker.
There is no surefire defence against sniffers. Only constant vigilance – and encryption. You must ensure that your password never traverses a network unencrypted; and that it is frequently changed.
A patch is a band-aid produced by a software vendor to heal a wound or vulnerability in its software.
It is sad but true that the majority of these vulnerabilities are discovered by security researchers (white hats) and hackers (black hats) rather than the software vendor itself. It is also a fact of life that many vendors ignore warnings from the researchers until the researchers make the vulnerability public in order to force the vendor to do something!
This is not the place to debate the rights and wrongs of this sequence of events (see full disclosure for more details on the subject), but the effect is that you can safely assume that the hacker community is well aware of the vulnerability by the time the vendor produces its patch. In other words, you only have a very short period of time within which to apply the patch before you can expect hackers to start trying to exploit the vulnerability.
So you must make sure that you apply all relevant patches as soon as they become available. This is critical.
Microsoft provides an automated patch service called Windows Update. Use it. It will help keep you secure by providing automatic downloads and installation of patches for both Windows and Internet Explorer.
And if you are not already a member, join the Home User WARP (see here for more about WARPs). This will provide you with free information on the latest vulnerabilities and patches for the software you use.
One of the problems with patches is that they can introduce new vulnerabilities themselves. This makes security managers sometimes reluctant to install them before they manage to test them themselves. Speaking at the Black Hat Security Briefings in July 2002, Richard Clarke (President Bush’s special adviser on cyberspace security), pointed to the chaos caused by the Nimda virus. He suggested that it wasn’t because the vulnerabilities were unknown, nor even that patches were not available — but because administrators had been reluctant to install the patches without testing them. The gist of his argument was that software vendors need to produce patches that are easier to install, and more trustworthy in the first place.
The part of a virus program that actually causes intended harm to a system (for example, deletion of files, display of messages, sending of e-mails etc), hence also called ‹Malicious Payload›. The payload is separate from other parts of the virus which may hide the code from anti-virus software (see stealth) or may cause the virus to spread to other hosts. Many viruses do not have a payload per se, but are designed to demonstrate the infection mechanism rather than to cause damage (see Proof of Concept). Such viruses are still highly dangerous as the act of spreading the virus, if successful enough, can cause system malfunction due to overloading.
It could be said that if the purpose of a virus is proof of concept concerning a mass replication method, then the replication is itself the payload. Under such circumstances the target is not the infected host, but the bandwidth of the Internet itself.
The successful violation of a protected system.
The Internet Society defines it as: Successful, repeatable, unauthorized access to a protected system resource. However, the fact that a vulnerability is closed, making that particular penetration no longer repeatable, does not to my mind make the earlier successful unauthorized access suddenly not a penetration.
Sometimes called ‹pen testing›, it is a security testing procedure that involves the legitimate attempt to penetrate a system’s security under the authority of the system’s owners. Its purpose is to locate and eliminate any vulnerabilities that could be exploited by hackers.
Penetration testing can be used as a method of auditing a company’s own security, or undertaken as part of a security certification process (such as TCSEC or Common Criteria).
It is useful to employ penetration testing to check the implementation of a new firewall. Users should expect to receive detailed logs and reports following the test, and should receive help and advice on how to close any vulnerabilities located.
A term for the authorized interactions a subject (usually a user) can have with an object (usually a file or a directory); such as read, write, execute, add, edit, delete, etcetera.
When setting permissions, the security administrator should consider the principle of least privilege.
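In the simplest terms, permissions can be modeled as explicit grants of rights to (subject, object) pairs, with a default-deny rule: the principle of least privilege in miniature. The subjects, objects and rights below are invented for illustration.

```python
# Rights explicitly granted to (subject, object) pairs; everything else is denied.
GRANTS = {
    ("alice", "report.txt"): {"read", "write"},
    ("bob",   "report.txt"): {"read"},
}

def allowed(subject, obj, action):
    # Least privilege: no right exists unless it was explicitly granted.
    return action in GRANTS.get((subject, obj), set())
```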
A personal firewall is a firewall designed for an individual PC – usually, but not necessarily, a home computer. Its purpose is to stop hackers gaining access to the computer from the Internet. A personal firewall should also be able to prevent installed Trojans or spyware from secretly ‹mailing home›.
Home computers are increasingly attractive targets as
- users start to store more and more of their personal and financial information on their PC
- broadband always-on Internet access (DSL) becomes more widespread.
‹Always-on› tends to use static IP addresses rather than the dynamic addresses used by dial-up access. It gives hackers more time to scan and penetrate home computers.
A personal firewall should be considered desirable for anybody accessing the Internet – and essential for anybody with always-on access.
There is also an argument for using personal firewalls in the enterprise. They provide additional desktop security within the perimeter. The requirement here is that although the firewall runs locally, it must be controlled centrally. Thus personal firewalls within the enterprise need to be integrated with a centralized policy enforcement system that can prevent the individual users from changing their firewall settings.
Personally Identifiable Information, PII
A new term that is being used to describe the type of electronic information that would constitute a breach of a person’s privacy. It is a difficult concept to define and a difficult concept to handle. Basically, storage of this information would almost inevitably be considered to be a moral breach of privacy, and depending upon your location, might well be considered a legal breach of privacy.
A generic term coined by PestPatrol to define the growing range of computer threats that cannot be described by any other single term. Pests include:
Phishing is fishing for personal information (such as passwords, credit card numbers and so on) over the Internet. It invariably involves social engineering in its methodology. Typically, the phisher would set up a website that could easily be mistaken for that of a legitimate financial institution, and would then send out emails with the intention of enticing people to submit personal information (eg, “we are updating our records; in order to ensure that your account remains functional, you must confirm your account password and credit card number”).
The address of the phony website is usually disguised. The standard for URL encoding is RFC 1738. This allows for the inclusion of authentication information before the domain name, separated from the domain name by an ‹@› symbol. It also allows IP addresses to be coded in hex, octal, or as a 32-bit integer. The phisher will exploit this by presenting the authentication part as the legitimate institution’s web address, followed by an ‹@› symbol, followed by the phony website’s address in one of these obfuscated forms. The result, using a 32-bit integer, would be something like: http://www.barclloyds.com@2130706433.
The scam relies on the target recognising the ‹barclloyds› name as legitimate, and ignoring the rest of the address. However, clicking on the link will send the browser not to the legitimate address in front of the ‹@› sign, but the disguised address after the ‹@› sign. Here the phisher will have attempted to make the site look sufficiently genuine to entice the target into entering passwords, account numbers, credit card numbers and so on.
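The 32-bit integer form is easy to reproduce: each octet of the dotted-quad address becomes one byte of the integer, so 127.0.0.1 encodes as 2130706433. A sketch:

```python
import ipaddress

def to_int(dotted_quad):
    # Shift each octet into place: a.b.c.d -> (a<<24) | (b<<16) | (c<<8) | d
    a, b, c, d = (int(part) for part in dotted_quad.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

value = to_int("127.0.0.1")
# The standard library computes the same value:
same = int(ipaddress.IPv4Address("127.0.0.1"))
```

Modern browsers largely defuse this particular trick by refusing or flagging user-info before the ‹@›, but the underlying encoding is still valid per RFC 1738.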
Phrack is most definitely a hacker’s magazine, and most definitely believes in and practices ‹full disclosure›. It can be reached at http://www.phrack.org.
Anybody seriously wishing to understand security should look at the Phrack articles. It is amoral rather than immoral.
A phreak is a ‹phone freak› – a hacker who concentrates his or her knowledge on telephone systems. The origins of phreaking possibly lie in university campuses from the ’50s onwards, where the cost of using the telephone system was a major burden. A phreak was a person who found weaknesses in the telephone system in order to gain cheap or free telephone usage.
Lower phone costs and the inability to separate telephones from computers and the Internet mean that the term is now little used and is in decline.
The most common definition of physical security is that it is the application of physical barriers and control procedures to protect information and information systems. Sometimes this is non-existent, while at other times it is excessive.
In 1994, the Joint Security Commission produced A Report to the Secretary of Defense and the Director of Central Intelligence entitled Redefining Security. On physical security it concluded:
“The Commission believes that a systems approach is necessary in making decisions about the application of security countermeasures. By placing all the responsibility for security on each of the security disciplines, we have created requirements for multiple layers of security that add little value. This is particularly apparent in physical security, where classified documents may be stored in locked containers inside locked strong rooms within secure buildings in fenced facilities patrolled by armed guards — overkill even at the height of the Cold War, much less in today’s security environment. A risk-managed systems approach would tailor countermeasures to threat and should result in significant savings that could be applied to improving personnel and information systems security, or to maintaining or improving other areas directly related to successful performance of defense and intelligence missions.”
Alec Muffett, however, uses the term in the sense of a ‹physical security hole› — “Where the potential problem is caused by giving unauthorised persons physical access to the machine, where this might allow them to perform things that they shouldn’t be able to do.”
A piggyback attack is an attempt to gain unauthorized access to a system via another user’s legitimate connection.
If the legitimate user’s connection is temporarily inactive, this is sometimes also called a ‹between-the-lines› attack.
If the attack attempts to seize control of the connection, it is sometimes known as a ‹hijack attack›.
Personal Identification Number — a numerical password, usually of four or six digits. PINs are best known for their use at cash machines in conjunction with an ATM or cash card. They are less secure than alphanumeric passwords, so are normally only used with some form of access token.
Since only numbers are used, there are significantly fewer possible combinations for a PIN than for a password of the same length. For example, there are only 10,000 possible four-digit PINs, but 7,311,616 four-character passwords, assuming upper- and lower-case letters are allowed. This increased risk is normally mitigated by a ‹three tries and you are out› policy on systems protected by a PIN.
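The keyspace arithmetic can be checked directly (a minimal Python sketch):

```python
# Compare the keyspace of a four-digit PIN with a four-character
# password drawn from upper- and lower-case letters.
pin_space = 10 ** 4            # digits 0-9, four positions
password_space = 52 ** 4       # 26 upper + 26 lower case letters, four positions

print(pin_space)       # 10000
print(password_space)  # 7311616
```

Adding digits to the password alphabet (62 characters) widens the gap further, which is why PINs are acceptable only when paired with a token and a lockout policy.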
Ping of Death
‹Ping of Death› is the name given to a Denial of Service exploit that was widely used in conjunction with the Ping utility. The exploit required the transmission of an illegal packet size; that is, a packet greater than 65,535 bytes, the maximum allowed by the IP specification. This often led to a buffer overflow on the receiving system – with sometimes disastrous and often unpredictable results: system crashes, reboots, kernel dumps and so on.
This exploit was widely used because many different platforms were susceptible, and the attacker only needed to know the system’s IP address. Ping was the most common medium, but in reality the problem could be exploited by anything that sends an IP datagram – probably the most fundamental building block of the Internet.
Most platforms now have effective patches and fixes, and the exploit is no longer as dangerous as it was.
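The arithmetic behind the exploit can be sketched in a few lines. The specific offset and payload figures below are illustrative assumptions, not taken from any particular attack tool:

```python
# The IP fragment offset field counts 8-byte units. A final fragment
# placed near the top of the 16-bit range can push the reassembled
# datagram past the 65,535-byte limit, overflowing a fixed-size
# reassembly buffer on a vulnerable system.
MAX_DATAGRAM = 65535

offset_units = 8189        # illustrative: near the maximum offset of 8191
payload_bytes = 1480       # a typical Ethernet-sized fragment payload

reassembled_end = offset_units * 8 + payload_bytes
print(reassembled_end)                 # 66992
print(reassembled_end > MAX_DATAGRAM)  # True: an illegal, oversized datagram
```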
Piracy in the IT world used to mean solely ‹software piracy›, “the illegal copying and resale of software programs, be they operating systems, applications, or leisureware.” (Project Trawler: Crime On The Information Highways – NCIS, UK.)
It has long been big business and organized crime:
“The software pirate groups – called Warez groups – demonstrate a high degree of organization. Some of these have a board of directors; global, national and regional headquarters; and staff with specific roles (suppliers of legitimate software; crackers who will remove the copyright protection systems built into the software; and a large distribution network of couriers, runners, and holders who provide the storage space for the customers to obtain the illegal software). These groups can be extremely well-equipped too. In some cases, UK law enforcement operations have seized large amounts of computer equipment…
“Exports of pirated software have occurred from the Far East and East Europe to West Europe. In March 1998, the FBI claimed to a US Congressional hearing that piracy is “an international crime problem that involves organized groups that conduct their counterfeiting enterprises multinationally”. In the UK, ELSPA has reported that in 80% of its raids on software pirates, offenders are found to be engaged in other crimes (including illegal drugs, fraud and theft). The experience of UK law enforcement indicates that those involved in the counterfeiting and pirating of software are often involved in other more conventional criminal activities, such as drugs, forgery, handling stolen property, firearms possession. However, NCIS has yet to identify significant involvement of top UK criminals.”
With more powerful PCs and better compression technologies bringing music and film to the Internet, piracy now affects software, books, music and film. The scale of the problem has been a direct cause of the Digital Millennium Copyright Act (DMCA) in the USA.
The Public-Key Cryptography Standards (PKCS) are a set of standards for public-key cryptography, developed by RSA Laboratories in cooperation with an informal consortium, originally including Apple, Microsoft, DEC, Lotus, Sun and MIT. The PKCS have been cited by the OIW (OSI Implementers’ Workshop) as a method for implementation of OSI standards. The PKCS are designed for binary and ASCII data; PKCS are also compatible with the ITU-T X.509 standard (see Question 5.3.2). The published standards are PKCS #1, #3, #5, #7, #8, #9, #10, #11 and #12; PKCS #13 and #14 are currently being developed. — RSA Labs FAQ v 4
* PKCS #1 defines mechanisms for encrypting and signing data using the RSA public-key cryptosystem.
* PKCS #3 defines a Diffie-Hellman key agreement protocol.
* PKCS #5 describes a method for encrypting a string with a secret key derived from a password.
* PKCS #6 is being phased out in favor of version 3 of X.509.
* PKCS #7 defines a general syntax for messages that include cryptographic enhancements such as digital signatures and encryption.
* PKCS #8 describes a format for private key information. This information includes a private key for some public key algorithm, and optionally a set of attributes.
* PKCS #9 defines selected attribute types for use in the other PKCS standards.
* PKCS #10 describes syntax for certification requests.
* PKCS #11 defines a technology-independent programming interface, called Cryptoki, for cryptographic devices such as smart cards and PCMCIA cards.
* PKCS #12 specifies a portable format for storing or transporting a user’s private keys, certificates, miscellaneous secrets, etc.
* PKCS #13 defines mechanisms for encrypting and signing data using Elliptic Curve Cryptography.
* PKCS #14 gives a standard for pseudo-random number generation.
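As a concrete example of one of these standards in everyday use: version 2.0 of PKCS #5 specifies the PBKDF2 password-based key derivation function, which Python exposes through the standard `hashlib.pbkdf2_hmac` call. A minimal sketch (the pass phrase, salt size and iteration count here are illustrative choices, not recommendations):

```python
import hashlib
import os

# Derive a 32-byte secret key from a password, per PKCS #5 (PBKDF2).
salt = os.urandom(16)
key = hashlib.pbkdf2_hmac("sha256", b"my pass phrase", salt, 100_000, dklen=32)

# The same password and salt always yield the same key...
again = hashlib.pbkdf2_hmac("sha256", b"my pass phrase", salt, 100_000, dklen=32)
# ...while a different salt yields a completely different one, which is
# why the salt is stored alongside the data it protects.
```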
PKI stands for Public Key Infrastructure. It is the software used to manage and control the large scale use of public key cryptography.
Public key cryptography uses a special mathematical relationship between two numbers to provide a pair of encryption keys. One of these keys is used to encrypt a message that can only be decrypted by the other related key. One is kept secret by the owner while the other is made public.
Thus, if I wish to send you a confidential message I need only obtain your public key to encrypt the message. Once encrypted by your public key, it can only be decrypted by your secret private key. (In reality, for performance reasons, I would encrypt the message with a different type of cryptographic key – a symmetric key – and would then use public key cryptography to encrypt and send the key with the message; but this is not important to the theory.)
This type of public key cryptography is considered to be the cornerstone of electronic commerce. The reason is that, almost as a by-product, it provides additional unique features. For example, if I obtain a hash of the message, and encrypt that with my secret key, it follows that only my public key can decrypt it. You can then verify that the message did indeed come from me – I have, in fact, digitally signed the message and will not at a later date be able to deny having sent it (known as non-repudiation).
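The key relationship described above can be illustrated with toy numbers. This is textbook RSA with tiny primes purely for demonstration – real keys use primes hundreds of digits long, and real signatures hash the message first:

```python
# Toy RSA: encrypt with the public key, decrypt with the private key;
# sign with the private key, verify with the public key.
p, q = 61, 53
n = p * q                      # 3233: the public modulus
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent
d = pow(e, -1, phi)            # 2753: private exponent (Python 3.8+)

message = 65
ciphertext = pow(message, e, n)    # anyone can encrypt with (e, n)
plaintext = pow(ciphertext, d, n)  # only the holder of d can decrypt
print(plaintext)                   # 65

# Signing is the reverse: ‹encrypt› a (toy) hash with the private key,
# then anyone can verify it with the public key.
digest = 123
signature = pow(digest, d, n)
print(pow(signature, e, n) == digest)  # True
```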
But consider this further. The proof is only that two keys are related – there is no proof as to the ‹owner› of the keys. For this we need an additional document – a digital certificate. The certificate is a document that confirms that a particular key pair belongs to a particular person or organisation.
This is where things get a bit difficult. For the concept to work, in theory everybody in the world should have at least one key pair, and a digital certificate to prove ownership of that key pair. There must then be a methodology for publishing the public keys so that they can be easily obtained; and, of course, a methodology for revoking lost, stolen or outdated key pairs.
This is what PKI is and does. It can generate public/private key pairs. It can issue certificates to validate ownership of those keys. It will usually provide an LDAP directory to publish the keys. And it will provide a CRL – or certificate revocation list.
PKIX is the name of the IETF working group that is specifying an architecture and set of protocols needed to support an X.509-based PKI for the Internet (it is a contraction of PKI X.509). It is also the collective name for the architecture itself.
The aim of PKIX is to facilitate the use of X.509 public-key certificates in multiple Internet applications and to promote interoperability between different implementations.
“The PKIX Working Group was established in the Fall of 1995 with the intent of developing Internet standards needed to support an X.509-based PKI. Several informational and standards track documents in support of the original goals of the WG have been approved by the IESG. The first of these standards, RFC 2459, profiles the X.509 version 3 certificates and version 2 CRLs for use in the Internet. The Certificate Management Protocol (CMP) (RFC 2510), the Online Certificate Status Protocol (OCSP) (RFC 2560), and the Certificate Management Request Format (CRMF) (RFC 2511) have been approved, as have profiles for the use of LDAP v2 for certificate and CRL storage (RFC 2587) and the use of FTP and HTTP for transport of PKI operations (RFC 2585). RFC 2527, an informational RFC on guidelines for certificate policies and practices also has been published, and the IESG has approved publication of an information RFC on use of KEA (RFC 2528) and is expected to do the same for ECDSA. Work continues on a second certificate management protocol, CMC, closely aligned with the PKCS publications and with the cryptographic message syntax (CMS) developed for S/MIME. A roadmap, providing a guide to the growing set of PKIX document, is also being developed as an informational RFC.” — http://www.ietf.org/html.charters/pkix-charter
Data that has not been encrypted, or ciphertext that has been decrypted. In other words, plain text.
Point-to-Point Tunneling Protocol (PPTP)
An extension of the Internet’s Point-to-Point Protocol (PPP). It allows organizations to use private ‹tunnels› to extend their own network across the public Internet, thus developing a Virtual Private Network (VPN).
PPTP is an industry standard sponsored by Microsoft and other companies. Any user with PPP client support is able to use an ISP, who also supports PPTP tunneling, to connect securely to a server elsewhere in the user’s company.
It has been claimed that Microsoft’s implementation of PPTP is not sufficiently secure.
The security policy is the collection of rules that define an organization’s security objectives and how those objectives are to be achieved. It is sometimes said that the security policy should be divided into two parts: issue policy and functional policy.
The issue policy is required to specify the organization’s areas of concern, and its attitude towards them.
The functional policy defines the mechanics of how those concerns are to be satisfied. This requires hardware and software specifications and usage policies, and also staff behavioral policies.
The security policy must also be clearly and fully documented; and enforced. For example, the policy’s First Rule could be “Failure to comply with the Corporate Security Policy is a disciplinary offence”.
Most viruses are easy to detect by scanners because, no matter how many times a virus copies itself, each copy will look precisely the same. A polymorphic virus attempts to evade all but the most advanced scanners by changing itself each time it creates a new copy. It does this by using different machine code commands which accomplish the same thing, or by re-arranging the order of the commands.
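The effect on signature scanning can be sketched as follows. The two byte strings here are hypothetical stand-ins for the machine code a polymorphic engine rewrites on each infection:

```python
import hashlib

# Two functionally identical routines written with different
# instructions - the same trick a polymorphic engine plays with
# machine code on every new copy it creates.
variant_a = b"y = n * 2"
variant_b = b"y = n + n"   # same effect, different bytes

# Both variants compute the same result...
env_a, env_b = {"n": 21}, {"n": 21}
exec(variant_a, env_a)
exec(variant_b, env_b)
print(env_a["y"], env_b["y"])   # 42 42

# ...but a scanner matching a fixed byte signature sees two
# unrelated files.
print(hashlib.md5(variant_a).hexdigest() == hashlib.md5(variant_b).hexdigest())  # False
```

This is why advanced scanners fall back on emulation or behavioural analysis rather than byte signatures alone.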
A number assigned to a network service listening on a computer system.
Many networking protocols use port numbers (one at each end of a conversation) as a means to support multiple independent communications streams between two or more systems. Within the TCP and UDP network protocols, for example, port numbers would allow two systems to have many different communications streams simultaneously by using a different port number for each.
Certain port numbers are registered and known as «well known ports» and it is these ports to which we connect to receive a standard service from a server. For example if we wish to connect to a web server we will normally address our request not only to the server’s IP address but also to port number 80 which is the well known port for HTTP, the protocol used to deliver the web page. In most applications the user is unaware of port numbers and these are either fixed standard ports or are negotiated by the software.
A software utility, used by hackers as well as system testers and software engineers, to determine if a particular TCP service is running on a particular host system. In a typical configuration the port scanner will scan through all of the «well known ports» (port numbers below 1024) in the TCP protocol, in order to elicit a response from the server. The scanner works on the principle that if the port is open on the server then some form of response will be forthcoming. The method is used to ‹enumerate› or list the services running that may be targets for some form of exploitation.
Many firewalls and other security systems will watch for multiple rapid requests from a single host to connect to target ports and will report this suspicious behavior to the system administrator. For this reason a second generation of port scanners known as ‹Stealth Scanners› was created. Stealth scanners will attempt to disguise the scan either by conducting it very slowly over a long period of time, or by sending some request other than a connection request in order to confuse the target.
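A basic (non-stealth) connect scanner is only a few lines of code. This is a hedged sketch, not any particular tool – the `scan` helper is a hypothetical name:

```python
import socket

def scan(host, ports, timeout=0.5):
    """Try a full TCP connection to each port; return those that accept."""
    open_ports = []
    for port in ports:
        try:
            # If the port is open, the connection completes and is
            # immediately closed again; anything else raises OSError.
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass
    return open_ports
```

For example, `scan("127.0.0.1", range(1, 1025))` would enumerate a host’s well-known ports. A stealth variant would pace these probes over hours, or send half-open or otherwise non-standard packets instead of completing the connection.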
The act of sending queries to Internet servers (hosts) in order to obtain information about their services and their level of security (see port scanner). On Internet hosts, there are standard port numbers for each type of service. Port scanning is sometimes performed by hackers and crackers to find out if a network can be compromised.
PPP (Point to Point Protocol)
A protocol that allows a computer to use the TCP/IP protocols and be directly connected to the Internet using a standard voice telephone line and a high-speed modem.
British Telecom’s dial-up viewdata service, used heavily by the travel trade. The first two people in the UK to be arrested for hacking were charged with unauthorised use of the Prestel computers.
Pretty Good Privacy (PGP)
PGP stands for Pretty Good Privacy. And Pretty Good Privacy stands for all that is fine and excellent about the Internet.
It is an e-mail encryption program developed by Phil Zimmermann. It uses strong cryptography: symmetric crypto for the text (IDEA) and asymmetric crypto (RSA) for key transmission. Trust in the ownership of the keys is based on the ‹web of trust› principle rather than the hierarchical approach taken by PKI’s X.509 certificates and the Certificate Authorities.
It is the most used e-mail encryption program in the world.
But PGP is important in what it stands for as well as what it is. And for this we need to understand some of its history.
It was developed by Phil Zimmermann and first released in 1991. In the months and years immediately following the Gulf War (1990-91), Zimmermann felt there was a distinct possibility that the US Government would seek to outlaw the use of encryption altogether.
In a very bold move, the details of which are confused, PGP was released to the Internet. It was free and it was open. And it rapidly spread around the world.
This upset the US authorities. Encryption is something that no government likes, and almost all governments have tried to control, are trying to control, or have succeeded in controlling. At the time, strong cryptography was classified by the US Government as ‹munitions›, and as such its export was strictly controlled. Zimmermann, however, had ‹exported› strong encryption around the world. Zimmermann became a target.
The original legal action against him was brought by RSA for breach of copyright. Nobody doubts, however, that the authorities were behind everything. But Zimmermann was ultimately protected by the US Constitution’s Freedom of Speech. (As an aside, and to show how daft the US laws had become, MIT produced a book of the PGP code using OCR-friendly type — and was legally allowed to export it.)
It is important here to consider the motives of the authorities (we’re actually talking about the US authorities, but this will apply very largely to all government agencies anywhere in the world). Strong cryptography means that LEAs (Law Enforcement Agencies) cannot understand what they are intercepting. Therefore, the twin aims of LEAs are to make interception easier, and to ban strong encryption, or gain the ability to decrypt strong encryption.
Now, if anyone suspects that this is paranoia, go away and read ‹Privacy on the Line›. LEAs will not rest until they gain control over encryption. And they will use all means to secure this. The legal aftermath of the September 11th atrocities against the World Trade Center is a greater attack on Western freedoms than the terrorists achieved themselves. Most of this is done in the name of countering terrorism — but the FBI itself has repeatedly stated that there is no evidence that these terrorists used any encryption; and let there be no mistake that the LEAs will not restrict these new powers to terrorist suspects.
The fact that PGP is ‹out there› will make it a lot harder for the LEAs to gain control. Says Zimmermann:
“Throughout the 1990s, I figured that if we want to resist this unsettling trend in the government to outlaw cryptography, one measure we can apply is to use cryptography as much as we can now while it’s still legal. When use of strong cryptography becomes popular, it’s harder for the government to criminalize it. Therefore, using PGP is good for preserving democracy. If privacy is outlawed, only outlaws will have privacy.
“It appears that the deployment of PGP must have worked, along with years of steady public outcry and industry pressure to relax the export controls. In the closing months of 1999, the Clinton administration announced a radical shift in export policy for crypto technology. They essentially threw out the whole export control regime. Now, we are finally able to export strong cryptography, with no upper limits on strength. It has been a long struggle, but we have finally won, at least on the export control front in the US. Now we must continue our efforts to deploy strong crypto, to blunt the effects of increasing surveillance efforts on the Internet by various governments. And we still need to entrench our right to use it domestically over the objections of the FBI.
“PGP empowers people to take their privacy into their own hands. There has been a growing social need for it. That’s why I wrote it.”
For the record we must add one short postscript. Towards the end of the 1990s, Zimmermann sold PGP to Network Associates, and joined the company himself. Network Associates, at the time of writing, owns the PGP trademark.
Many people were afraid that he had sold out at the same time. But in 2001 Zimmermann left Network Associates in a manner that demonstrated he was not happy with the company’s intentions for his product. He allied himself with Hush Communications and became a founding member of the OpenPGP Alliance (http://www.openpgp.org).
OpenPGP is the most widely used email encryption standard in the world. It is defined by the OpenPGP Working Group of the Internet Engineering Task Force (IETF) standard RFC 2440. The OpenPGP standard was originally derived from PGP (Pretty Good Privacy), first created by Phil Zimmermann in 1991.
The OpenPGP Alliance is a growing group of companies and other organizations that are implementers of the OpenPGP standard. The Alliance works to facilitate technical interoperability and marketing synergy between OpenPGP implementations.
PGP is back where it belongs. It and Phil Zimmermann have probably done more for the cause of personal freedom than anything else in the last ten years.
Also known as Prime Numbers.
Numbers which are divisible only by themselves and by one. So the first prime numbers in series are 2, 3, 5, 7, 11, 13, 17 etc. Prime numbers are a vital part of the widely used RSA Public Key Encryption algorithm. A user’s public key consists mainly of the product of two large prime numbers. In theory an attacker could factor this large product to recover the two primes and from them reconstruct the user’s private key. In practice the security of the algorithm depends on the fact that it is very hard, or ‹computationally infeasible›, to factor large numbers into their prime factors.
It should be stated that the mathematical difficulty in factoring large numbers is an assumption. There is no publicly known easy method to achieve this, and there is no mathematical proof to say it is impossible.
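A naive factoring sketch shows why key size is everything. Trial division recovers the primes of a toy modulus instantly, but its running time grows with the square root of the smallest factor, which is astronomical for real RSA moduli:

```python
def trial_factor(n):
    """Recover a factorisation of n by trial division - fine for toy
    numbers, hopeless for the 600-digit moduli used in real RSA keys."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    return n, 1   # n itself is prime

# A toy RSA modulus (53 x 61) falls immediately:
print(trial_factor(3233))   # (53, 61)
```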
Privacy is defined by Roger Clarke thus:
Privacy is the interest that individuals have in sustaining a ‹personal space›, free from interference by other people and organisations. (http://www.anu.edu.au/people/Roger.Clarke/DV/Intro.html#Priv)
‹Privacy› and ‹secrecy› are easily confused. Secrecy is something you might seek. Privacy is something you should have. You might, for example, wish to keep some communications secret. You should have the right to expect that the communications remain private.
However, neither is easily attainable in the modern world. Strong encryption can render data communications secret and private – but only depending on where you live in the world. Governments are increasingly taking the right to demand your encryption keys. And the same Governments are increasing electronic surveillance through systems such as Echelon, Carnivore and the UK’s RIP Act.
Strong encryption is ultimately the only hope for retaining either secrecy or privacy.
The EU Directive 95/46 is the European Union Directive that imposed controls on the protection of personal data stored on computer systems within the European Union (see the Data Protection Act within the UK).
One of the most difficult areas is the restriction placed on trans-national data flows, where personal data cannot be ‹exported› to a country with weaker controls on the privacy of personal data; that is, one that does not meet the EU’s ‹adequacy› standards. This was particularly difficult between Europe and the USA. The USA has a tradition of ‹freedom of information›, while Europe has a tradition of ‹restriction of information›. See Safe Harbor for further details.
It is worth comparing the different approaches taken by the USA and Europe. Europe’s approach is one of blanket legislation; that is, one size fits all. The US approach has been described as “a sectoral approach that relies on a mix of legislation, regulation, and self regulation.”
In Europe, if you store personal data on your PDA (such as a list of contacts that might include their home phone number), then you need to consider the implications of the Privacy Directive. As a result, because of silly details like this, the Directive is frequently ignored.
The USA takes the view that some ‹consumers› need greater protection than others, and have legislated for particular groups rather than everyone. Thus, COPPA protects the privacy of children, GLBA protects the privacy of financial customers, and HIPAA protects the privacy of medical patients.
Private Key / Secret Key
A private key is a secret key; that is, one that is not disclosed to anybody who is not authorised to know it. The two terms could thus, semantically, be used interchangeably.
They are, however, usually used to describe the hidden key in two separate crypto methodologies: the ‹private key› describes the hidden key within asymmetric cryptography, and the ‹secret key› describes the hidden key within symmetric cryptography.
Within asymmetric cryptography, the private key is the undisclosed key in the matched key pair (viz, the private key and the public key). In asymmetric cryptography, the private key is known only to its owner. It is the key used to decrypt messages or files that have been encrypted with the corresponding public key. It can also be used to create a digital signature by encrypting a hash of the message.
In symmetric cryptography, a secret key is the key known only to the sender and authorized recipients. Used in this sense, the term ‹secret key cryptography› is actually synonymous with symmetric cryptography.
An attempt to gather information about an information system for the apparent purpose of circumventing its security controls.
The majority of probes detected by personal firewall systems are most likely caused by script kiddies using automated tools to scan vast areas of the Internet seeking specific vulnerabilities. A probe does not indicate a successful intrusion, but could indicate the existence of an attacker who may be more persistent than the average script kiddie.
Promiscuous Mode (Packet sniffing)
A networked computer that is ‹listening› to all the traffic on the network rather than just the traffic intended for it, is said to be in ‹promiscuous mode›. The act itself is often called ‹packet sniffing›.
Hackers will attempt to compromise a weakly secured computer on a network in order to install a packet sniffer. This is a tool that puts the computer into promiscuous mode and then monitors and can record all the traffic on the network. From the information gathered from the computer in promiscuous mode, the hacker will then seek to compromise the rest of the network.
Note that packet sniffers also have a valuable and legitimate role in computer security.
One of the problems for the security manager is that it is not always easy to tell if a remote computer has been put into promiscuous mode.
Proof of Concept
Proof of concept is a term often used to describe a type of virus. It is a virus that does not contain a destructive payload. Its purpose is not to cause damage or disruption, but rather to demonstrate the effectiveness of a new technology or approach.
To understand this, you really need to consider the mind and motivation of the virus writer. Some are like real-world graffiti writers – they simply do it because they’re bored and/or they like to cause minor mischief.
Others may have a political motive, attempting to make a specific point about something. Others are just plain nasty people.
But throughout all this, there is one common motive – the desire for kudos among their peers. Now, if you take the hacker mentality (as opposed to the cracker – that is, a deep knowledge of and interest in how things really work and their limits), add to this a desire for kudos in the eyes of other hackers, but remove from it any intention to cause direct harm, then you have a prime candidate for authoring ‹proof of concept› viruses. These are designed to do no more than prove what can be done, and to shout “I did it first!”
Is there any harm in this? Well, yes. If the concept is to prove a new mass mailing methodology, then the proof alone will cause DoS problems on a wide scale. And proof of any concept inevitably allows subsequent copycats with no scruples to develop another virus along similar lines with a really destructive payload.
Protection Profile (PP)
In the context of the Common Criteria (CC), a protection profile defines an implementation-independent set of security requirements and objectives for a category of products or systems which meet similar consumers’ needs for IT security.
A PP is intended to be reusable and to define requirements which are known to be useful and effective in meeting the identified objectives. In this sense a PP is a form of template designed to maintain a common standard within a particular category.
There is no central PP registry – instead a system of linked national registries is to be implemented. PPs may be defined by developers when formulating security specifications for TOEs, or by user communities. Early examples of PPs found in CC (Version 1.0) Part 4 include:
* Commercial Security 1 (CS1) consisting of security requirements and evaluation interpretations for products providing basic controlled access protection.
* Commercial Security 3 (CS3) specifying requirements for multi-user operating systems, database management environments, and calls for access controls based on individual user roles with respect to data objects and permitted operations.
* Packet Filter Firewall (PFFW) specifying security functions and assurances applicable to most packet filter firewalls.
A set of conventions that govern the interaction of processes, devices, and other components within a system. If components manufactured by different vendors use the same protocol, they should be able to communicate with each other.
A computer process that relays a protocol between client and server systems by appearing to be the client to the server, and appearing to be the server to the client.
Proxies are often used within firewalls to prevent a direct connection from outside of the firewall to a protected system inside the firewall.
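The ‹client to the server, server to the client› relay can be sketched in a few dozen lines. This is a minimal single-connection illustration, with hypothetical helper names, not a production proxy:

```python
import socket
import threading

def make_listener():
    """Listen on an ephemeral localhost port."""
    s = socket.socket()
    s.bind(("127.0.0.1", 0))
    s.listen(1)
    return s

def echo_once(srv):
    """A stand-in 'server': accept one connection, echo one message."""
    conn, _ = srv.accept()
    conn.sendall(conn.recv(1024))
    conn.close()

def proxy_once(srv, target):
    """Relay one exchange: appear as the server to the client,
    and as the client to the real server."""
    client, _ = srv.accept()
    upstream = socket.create_connection(target)
    upstream.sendall(client.recv(1024))   # forward the request inward
    client.sendall(upstream.recv(1024))   # forward the reply outward
    upstream.close()
    client.close()

# Wire up client -> proxy -> server, all on localhost.
server = make_listener()
proxy = make_listener()
threading.Thread(target=echo_once, args=(server,), daemon=True).start()
threading.Thread(target=proxy_once,
                 args=(proxy, server.getsockname()), daemon=True).start()

c = socket.create_connection(proxy.getsockname())
c.sendall(b"hello via the proxy")
reply = c.recv(1024)   # the echo, relayed in both directions
c.close()
```

The client never connects directly to the server – exactly the property a firewall proxy exploits to keep outside hosts away from protected systems.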
Pseudo Random Number Generator
Cryptographic systems rely very heavily on random numbers. Most cryptographic software consequently includes a random number generating function. But a fixed set of computer instructions can only ever produce a fixed set of numbers — and although there may be tricks to randomize these numbers as far as possible, ultimately they remain predictable to one degree or another. In effect, they are not random numbers, but pseudo random numbers. And the random number generator is a pseudo random number generator.
RSA Security defines the difference between a random number and a pseudo random number as that “pseudorandom numbers are necessarily periodic whereas truly random numbers are not”.
Getting a more apparently random number from a pseudo random number generator requires a seed.
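The predictability is easy to demonstrate with any general-purpose PRNG (Python’s `random` module here; note this generator is explicitly not cryptographically secure):

```python
import random

# Two generators given the same seed produce the same 'random' stream -
# the hallmark of a pseudo random number generator. An attacker who can
# guess the seed can reproduce every key derived from it.
a = random.Random(2024)
b = random.Random(2024)
stream_a = [a.randint(0, 9) for _ in range(12)]
stream_b = [b.randint(0, 9) for _ in range(12)]
print(stream_a == stream_b)   # True: fully determined by the seed
```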
Pseudo Random Number/PRN
A number which appears to be truly random but is not, due to the means used to generate it. Random numbers are widely used within cryptographic applications to generate keys which cannot be guessed by an attacker. A cryptographic algorithm is said to be ‹perfect› if the only way to reveal the encrypted message is to try every possible key in turn (brute force) until the correct result is discovered. Even a perfect algorithm can be broken if the key or keys used can be guessed or calculated by an attacker. Cryptanalysts have found that many cryptographic applications use a computer-generated PRN as their key and that, by knowing how these numbers are generated, they can predict or significantly narrow the number of possible keys.
It is widely believed that the only way to generate a truly random number is to sample some natural phenomenon. This has led to the widespread practice of gathering data from the user’s movement of the computer mouse to generate a true random number.
In some high security applications, truly random sources such as ‹white noise› or the rate of decay of a radioactive source are used to generate secure keys. Ironically, such systems will typically test their own output for randomness and will reject obvious sequences such as all ones or all zeros (in binary). The act of rejecting these truly random strings does, of course, mean that the output is no longer truly random.
The key in a matched key pair (that is, the private key and the public key) that may be published; for example, posted in a directory for public key cryptography.
Public Key Cryptography
Public Law 100-235
Also known as the Computer Security Act of 1987. This US law creates a means for establishing the minimum acceptable security practices for improving the security and privacy of sensitive information in federal computer systems. The law assigns to the National Institute of Standards and Technology responsibility for developing standards and guidelines for federal computer systems processing unclassified data. The law also requires establishment of security plans by all operators of federal computer systems that contain sensitive information.
PWL – Password List File
When you log onto a network under Windows, you will usually see a dialog asking for your username and password. This dialog often also includes a check box offering to save your password, so you do not need to enter it the next time you log on.
You are advised not to do this, because it means that anyone who has access to your computer (even if it is switched off) will be able to access the network as if they were you. Windows already knows the password that it would normally ask you to supply when you restart the computer, so they will not need to enter it.
Also, for Windows to know your password, it must be stored somewhere on your computer. It cannot be encrypted, because then Windows would need to ask you to enter a password to access your password, which would defeat the object of the exercise. In fact, your password is stored in a PWL file using a scrambling algorithm that makes the actual password invisible to a casual observer.
But because Windows is able to unscramble the PWL file by itself, without any help from you, it follows that an attacker with a copy of your PWL file could unscramble the file, too. This is another reason why PWL files should be regarded as insecure, and you should avoid asking Windows to “remember” your passwords. It is well worth the inconvenience of being asked for your password each time you log on.
from Sophos' V-Files
Classic cryptography employs various mathematical techniques to prevent eavesdroppers from learning the contents of encrypted messages. Its strength is based upon assumptions about the strength of the mathematical algorithm being used (such as the assumed difficulty of factoring very large numbers into their prime factors). As such, the strength of the security is an assumption and not a guarantee.
In quantum cryptography the information is protected by the laws of physics. Heisenberg’s Uncertainty Principle and quantum entanglement can be exploited in a system of secure communication, often referred to as ‹quantum cryptography›. Quantum cryptography provides the means for two parties to exchange an enciphering key over a private channel. The amount of information that can be transmitted is not very large, but it is provably very secure; and it is potentially an excellent replacement for the Diffie-Hellman key exchange algorithm.
The system includes a transmitter and a receiver. A sender may use the transmitter to send photons in one of four polarisations: 0, 45, 90, or 135 degrees. A recipient at the other end uses the receiver to measure the polarisation. According to the laws of quantum mechanics, the receiver can distinguish between rectilinear polarisations (0 and 90), or it can quickly be reconfigured to discriminate between diagonal polarisations (45 and 135); it can never, however, distinguish both types.
The key distribution requires several steps. The sender sends photons with one of the four polarisations which are chosen at random. For each incoming photon, the receiver chooses at random the type of measurement: either the rectilinear type or the diagonal type. The receiver records the results of the measurements but keeps them secret. Subsequently the receiver publicly announces the type of measurement (but not the results) and the sender tells the receiver which measurements were of the correct type. The two parties (the sender and the receiver) keep all cases in which the receiver measurements were of the correct type. These cases are then translated into bits (1s and 0s) and thereby become the key.
An eavesdropper is bound to introduce errors to this transmission because he/she does not know in advance the type of polarisation of each photon and quantum mechanics does not allow him/her to acquire sharp values of two non-commuting observables (here rectilinear and diagonal polarisations). The two legitimate users of the quantum channel test for eavesdropping by revealing a random subset of the key bits and checking (in public) the error rate. Although they cannot prevent eavesdropping, they will never be fooled by an eavesdropper because, however subtle and sophisticated, any effort to tap the channel will be detected. Whenever they are not happy with the security of the channel they can set up the key distribution again.
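The key distribution steps described above can be sketched classically. The toy below models only the bookkeeping of the scheme (random bit and basis choices, then sifting); it cannot, of course, capture the quantum physics that makes eavesdropping detectable, and all names are illustrative:

```python
import secrets

# Toy model of the quantum key distribution sifting procedure (BB84-style).
# "+" = rectilinear polarisation (0/90 degrees), "x" = diagonal (45/135 degrees).
RECTILINEAR, DIAGONAL = "+", "x"

def random_bits(n):
    return [secrets.randbelow(2) for _ in range(n)]

def random_bases(n):
    return [secrets.choice((RECTILINEAR, DIAGONAL)) for _ in range(n)]

n = 64
sender_bits = random_bits(n)       # the raw key material
sender_bases = random_bases(n)     # polarisation type chosen per photon
receiver_bases = random_bases(n)   # measurement type chosen per photon

# A measurement in the matching basis yields the correct bit; in the wrong
# basis the outcome is random (modelled here as a coin flip).
received = [b if sb == rb else secrets.randbelow(2)
            for b, sb, rb in zip(sender_bits, sender_bases, receiver_bases)]

# Publicly compare bases (never bits) and keep only the matching positions.
key_sender = [b for b, sb, rb in zip(sender_bits, sender_bases, receiver_bases)
              if sb == rb]
key_receiver = [b for b, sb, rb in zip(received, sender_bases, receiver_bases)
                if sb == rb]
assert key_sender == key_receiver   # with no eavesdropper, the sifted keys agree
```

On average half the positions survive sifting; an eavesdropper measuring in a randomly chosen basis would corrupt about a quarter of the sifted bits, which is what the public error-rate check detects.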
This is a term used by anti-virus programs to describe the holding area to which suspicious or infected files are moved so that they are unavailable to the user, but not lost for ever. This allows security-conscious organisations to remove infected files from general circulation without deleting them permanently. An administrator can then attempt to disinfect them under controlled conditions, and make an informed decision about whether to return cleaned files to their original owner.
One benefit of allowing the administrator to decide whether files should be returned is that many macro viruses make deliberate and malicious changes to documents or spreadsheets they infect. This means that even after cleaning, files may contain damage, possibly subtle, which affects their validity or usefulness.
The virus XM/Compat, for example, which infects spreadsheets (see XLS), makes small and gradual changes to the contents of numeric cells in the file. Each time a file is opened, about 1% of numbers are adjusted up or down by up to 5%. In some environments, policy might deem it unacceptable to continue using files that have been damaged in this way: with quarantine, the administrator can act to enforce this policy.
One disadvantage of quarantine is that it can place a heavy burden on the administrator, particularly on a large server supporting many users. In the event of a significant virus outbreak, the administrator may end up with hundreds, even thousands, of documents to process. Furthermore, administrators may end up being called upon to vet the contents of documents that they would not ordinarily be authorized to read, which may contravene company policy.
from Sophos' V-Files
The term is now also used when e-mails or attachments are blocked by content security software and moved to a quarantine area, where they can safely be inspected by an administrator who decides whether or not to allow them through to the end-user.
r commands (Unix)
The ‹r› commands are a family of commands found in most Unix variants. The ‹r› stands for remote, as in rlogin, rcp and rexec. They were a blessing for early Unix systems but are a liability in the Internet age. They are also, perhaps, the origin of today's single sign-on.
The story starts with the genesis of the Internet itself. In the '70s, graduate students at the University of California, Berkeley were awarded a DoD contract to develop a wide-area networking technology that could unify hundreds of computers throughout the U.S. Defense Department. This is the work that evolved into TCP/IP.
Among the early developments was Telnet. Although in many ways obsolete today, this was a particularly sophisticated and complex application in its time. It was designed to fulfil a fundamental role of networked computers: to be able to run applications remotely. While HTTP and FTP allow you to request specific files from remote computers, Telnet lets you log into the remote computer. This, of course, requires a degree of security, so Telnet includes username and password authentication.
But at that time the application was still evolving, and it required the user to enter a username and password for each remote computer to be accessed. Many of the graduates found this frustrating. One went so far as to write a set of routines (like the UK's income tax, purely on a temporary basis to solve a specific problem) that could be used until Telnet solidified. He called the routines rlogin, rcp and rexec, where the “r” stands for “remote”. They were a great success with colleagues, and soon became widely used within Berkeley.
The routine that concerns us here is rlogin. It allows the user to log into the remote computer without any further authentication. The reason is simple: it was designed for use not just within the closed computer community of a university campus, but specifically within its computer labs. In order to be able to use rlogin, you would first need to log in to your local computer. This is where any necessary security resided. Once logged in, it could be assumed that you were an authorized user and could therefore access any other connected computers. The rlogin protocol uses trusted hosts and privileged ports for its user authentication. If you are logging in from a computer which the target computer “trusts”, it may let you log in without any additional validation. In other words, rlogin provides a simple single sign-on facility.
It was an immediate success; first within UCB, and then also within the DoD. In fact, it was so well liked that it escaped from its immediate and insulated environment. Manufacturers began to include rlogin within their own operating systems, and it has become an integral part of the majority of Unix variants. However, the more recent combination of widely available rlogin together with the universality of connected hosts on the Internet has turned a useful facility into a dangerous liability (but one that remains useful nevertheless). We can illustrate this by considering a typical rlogin configuration.
It works by maintaining and using a list of trusted hosts. This is usually a text file stored online and protected so that only system administrators can change it. Hosts whose names are found in this list are considered to be “trusted”, and users connecting from these hosts will not be asked to prove their identities. So, the simplest way to provide single sign-on within a network of hosts would be to include a trusted host list containing the names of all other hosts on each host. Put simply, no matter which host you start from, once correctly authenticated you will be able to access any other host.
If we wanted to refine this so that only specified users could hop from one host to another, we could set up a designated host as our single sign-on server. Its list of trusted hosts, however, should be empty. Its purpose is not to trust other hosts, but to be trusted by them. All other hosts should have trusted lists containing the name of this single sign-on server host; and the server should have separate authentication procedures.
With this procedure, all users would be able to log on to their own primary host. They would also be able to log on, in the traditional manner, to any other host for which they had authorization. But only some specifically authorized users would be able to log onto the single sign-on server. This host, remember, is trusted by all the other hosts in the network. So once a user has been able to log onto the server, he or she will be able to hop about all of the other hosts without ever being asked to re-key their password.
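The trust relationships in this single sign-on arrangement boil down to a simple lookup. The host names below are invented for illustration (real rlogin reads its trust data from files such as /etc/hosts.equiv), but the logic is the one just described:

```python
# A minimal model of the trusted-host arrangement described above: ordinary
# hosts trust only the single sign-on server; the server trusts nobody.
TRUSTED_HOSTS = {
    "workstation-a": {"sso-server"},   # hypothetical host names
    "workstation-b": {"sso-server"},
    "sso-server":    set(),            # its trusted-host list is empty
}

def may_login_without_password(from_host: str, to_host: str) -> bool:
    """True if `to_host` trusts `from_host`, so no re-authentication is needed."""
    return from_host in TRUSTED_HOSTS.get(to_host, set())

# Once logged into the sso-server, a user can hop to any workstation...
assert may_login_without_password("sso-server", "workstation-a")
# ...but cannot reach the sso-server itself without its separate authentication.
assert not may_login_without_password("workstation-a", "sso-server")
```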
A collection of books published by the U.S. Department of Defense. Each book had a different cover color, and the collection was consequently nicknamed ‹the rainbow series›.
The most famous of these books is the so-called ‹Orange Book› (US DOD 5200.28-STD), but there are other books in the series that would interest IT security professionals (specifically, US DOD 5200.8-R — Physical Security Program — and US DOD 5200.28-M — Security Requirements for Automatic Data Processing Systems).
An encryption algorithm considered to be among the most secure available. Used by official UK government departments and agencies, and rarely made available to private companies.
A random number is a number that is completely arbitrary in nature other than its length. Random numbers are widely used in cryptography as keys. The point of using a random number is that it should be impossible to guess, calculate or even narrow the field of possible outcomes; but see also Pseudo Random Number.
Random Number Generator
A hardware or software based system which produces Random or Pseudo Random Numbers. True random number generators must work by sampling some truly random source such as the rate of decay of a nuclear radiation source. Generators which do not do this are Pseudo Random Number Generators and rely on the belief that it will be extremely difficult to predict what number the computer will come up with next.
Two secret key encryption algorithms developed by Ronald Rivest and owned by RSA Security. RC2 is a variable key length 64-bit block cipher, while RC4 is a variable key length stream cipher.
The key lengths can be anything between 1 and 2048 bits. 40-bit keys, which are really too small to be useful, are still widely used around the world because of earlier US export restrictions.
The RC4 algorithm was anonymously published in a Usenet posting in 1994; and the RC2 algorithm was similarly published in 1996. The algorithms are considered to be strong and can be about ten times faster than DES when implemented in software.
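For historical interest, the RC4 algorithm circulated in that 1994 Usenet posting is short enough to sketch in full: a key-scheduling phase permutes the numbers 0–255 under the key, then a generation phase produces a keystream that is XORed with the data. It should not, of course, be used to protect anything today:

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """RC4 as published: key-scheduling (KSA) then keystream generation (PRGA).
    Shown for historical interest only; RC4 is no longer considered secure."""
    # Key-scheduling algorithm: permute 0..255 under the influence of the key.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm: XOR the keystream with the data.
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

# A stream cipher is its own inverse: encrypting twice restores the plaintext.
ct = rc4(b"Key", b"Plaintext")
assert rc4(b"Key", ct) == b"Plaintext"
assert ct.hex() == "bbf316e8d940af0ad3"   # widely published RC4 test vector
```

The brevity of the inner loop (a handful of additions and swaps per byte) is the reason RC4 runs so much faster than DES in software.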
A basic computer operation that results only in the flow of information from an object to a subject.
Permission to read information.
Registration Authority (RA)
In PKI, the Registration Authority is the authority charged with recording or verifying some or all of the data required for the Certificate Authority to issue certificates and maintain CRLs. In many cases the CA will undertake all of the RA functions itself; but where a CA operates over a wide geographical area it may be administratively advisable to delegate some of the tasks to an RA. The delegated tasks might include personal authentication, token distribution, key generation, revocation and others — leaving the CA to concentrate on its primary tasks of signing certificates and CRLs.
The Registry is a crucial element of Windows. Loss of the Registry will mean that few applications on the hard disk can be used without re-installing them.
The Windows Registry is a set of data files used to store the settings that Windows uses to control hardware, software, and Windows itself. These files are special files that are treated as a «database». A database is a collection of data stored in files with a specific structure to them.
The Registry data files cannot be read by simply opening them in a text editor such as Notepad or Microsoft Word. These special files must be read from, and written to, using a special program written specifically for that purpose. Windows includes such a utility in all versions that make use of the Registry. This utility is called regedit.exe.
The Windows Registry is organized in a tree-like folder structure, similar to what you see in Windows Explorer when looking for a file on your computer. The top level «folders» of the Registry are called «hives». The most important hives to know if you need to edit your Registry are
HKEY_LOCAL_MACHINE and HKEY_CURRENT_USER.
Subfolders of these top-level «hives» are called «keys». Subfolders of keys are called «sub-keys». And finally, the actual data stored within the keys, such as specific settings, are called «values».
You should always use extreme caution when editing your Registry as there is no «save» function. All the changes you make are immediately effective as soon as you type them; so if you make a mistake in the wrong place, you could cause your computer to become unstable, or worse, prevent it from even starting up. That said, in most cases, if you carefully follow instructions, you CAN safely edit the Registry.
Spyware, adware, Trojans and other pests will normally seek to start up automatically each time you start your computer. Windows uses a special Registry key to store the settings for the programs that are to run automatically on booting up. Using the Registry editor (regedit.exe), you can navigate to this key.
Each value in this key is a different program that will run at start-up. Often when trying to remove various forms of spyware or adware, Trojans, etc., you will need to locate the offending program in this key and delete it.
To back up the Registry, back up user.dat and system.dat, which are hidden files in the Windows directory. These two files comprise the Registry. Alternatively, use the ‹Export Registry File› menu option within regedit itself.
A system’s capability of being administered (operated) via a connection from a remote external terminal.
This is a legitimate requirement, and there are many legitimate software applications that can provide this facility. However, it is also a function of many Trojan Horses (such as Back Orifice and Sub7). In some cases, the developers of such software attempt to reclassify their products as commercial ‹remote administration› tools. This is normally done by ‹selling› the tool openly via a website. The purpose is almost certainly to make it more difficult for anti-virus producers to automatically remove them as malicious Trojans, since removal of a legitimate remote administration tool would have legal implications. (The better AV products are likely to detect the product and ask the user if he or she would like it removed.)
As a rule of thumb, we would suggest that any remote administration tool that openly declares itself is probably legitimate, while those that disguise themselves and hide from the user should be classified as Trojans.
Ironically, many security administrators consider the feature-rich Back Orifice Trojan to be the best remote administration tool available.
Remote Administration Tool (RAT)
A remote administration tool does what it says on the can: it allows a workstation to be administered remotely, usually over the Internet.
(RATs are sometimes called ‹remote administration trojans›. This is a misnomer, although a Trojan is often the delivery mechanism for a RAT. In this scenario, the RAT is more akin to the Greek soldiers hiding inside the Trojan Horse than to the Horse itself. RATs are also sometimes described as Backdoor Servers. However, RAT remains the best name and description.)
A RAT comprises two elements: the server (installed on the “target’s” workstation) and a client used by the remote administrator.
There are perfectly valid reasons for the use of a RAT provided that the remote administrator has the right to install the server (for example, owns the PC), and that the user knows that it is there. However, if the server is installed by trickery (such as via a Trojan Horse), or secretly, then the RAT is malware rather than software.
Request for Comments/RFC
RFCs (Requests for Comments) are a series of technical and organizational notes about the Internet and computer networks. They are not ‹standards›, but are effectively treated as such.
RFCs are published by the RFC Editor, which is today a small group funded by the Internet Society. More details, including a list of the RFCs, can be found at http://www.rfc-editor.org/.
The part of risk remaining after security measures have been implemented.
One of the levels of marking used within the UK Government's, and other governments', Protective Marking Schemes. Also commonly referred to as levels of classification. Differing levels of classification require differing countermeasures to protect them and differing levels of clearance to access them.
A symmetric encryption algorithm invented by Joan Daemen and Vincent Rijmen. After a three-year competition organized by the US National Institute of Standards and Technology (NIST), Rijndael was selected to be the US Government’s Advanced Encryption Standard (AES). AES will gradually replace the Data Encryption Standard (DES) in US Government use and will be recommended to the banking and business communities.
RIPA (Regulation of Investigatory Powers Act – UK)
The UK Government Regulation of Investigatory Powers Act. Following the removal of a number of elements due to heavy lobbying by UK industry the controversial RIP Bill became law in 2000. The full text of the act can be found at www.hmso.gov.uk.
The Act provides a legal framework for the covert or overt monitoring of communications including telephone, fax and email by authorized persons. A related statutory instrument called the ‹Lawful Business Practices Regulations› provides a framework under which employers may be allowed to monitor the communications of their employees taking place over networks owned or controlled by the employer.
The likelihood that a given vulnerability will be exploited by a particular threat.
Determining what you need to protect, what you need to protect it from, and how you should protect it. It is the process of examining all of your risks, and ranking those risks by level of severity.
The total process of identifying, controlling, and eliminating or minimizing uncertain events that may affect system resources. It includes risk analysis, cost-risk analysis, selection, implementation and test, security evaluation of safeguards, and overall security review.
Role Based Access Control (RBAC)
Role Based Access Control (RBAC) is a security environment in which users' rights to access or change information are controlled by the role or roles they fulfil within the organization; that is, by ‹what› they are, rather than ‹who› they are.
Such systems are becoming increasingly popular, since they promise to require fewer exceptions to be managed, with a consequent saving in management effort and costs.
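The idea can be sketched in a few lines: permissions attach to roles, and users acquire permissions only through the roles they hold. The roles, users and actions below are invented for illustration:

```python
# A minimal sketch of role based access control: access decisions depend on
# 'what' a user is (their roles), not 'who' they are.
ROLE_PERMISSIONS = {
    "auditor": {"read"},
    "clerk":   {"read", "write"},
    "admin":   {"read", "write", "delete"},
}

USER_ROLES = {"alice": {"auditor"}, "bob": {"clerk", "admin"}}

def is_allowed(user: str, action: str) -> bool:
    """A user may perform an action if any of their roles grants it."""
    return any(action in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

assert is_allowed("alice", "read")
assert not is_allowed("alice", "write")   # being an auditor decides, not being alice
assert is_allowed("bob", "delete")
```

Moving alice from ‹auditor› to ‹clerk› changes all her rights in one edit, which is the management saving the entry refers to.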
Root is the name given to the ‹superuser› account on a Unix system. The account ignores permission bits, so anybody using this account has complete freedom within the system. Gaining root is thus the primary aim for anybody attacking a Unix system.
“Whoever has root sets the permissions; whoever sets the permissions has control of the entire system. If you have compromised root, you have seized control of the box (and maybe the entire network).”
Maximum Security: A Hacker’s Guide to Protecting Your Internet Site and Network.
A rootkit is a cracker tool that captures passwords and message traffic to and from a computer.
It is software designed to replace specific components of an operating system, so that once installed it creates back doors in the compromised system, allowing continuous system access to the cracker.
Usually, the installation of a Rootkit is the first step performed by a cracker after penetrating a system, allowing the cracker to re-access the system, even if the root password is changed, or if a system reconfiguration is performed.
A rootkit is a classic example of a Trojan Horse.
A hardware device connected to a host on a LAN that acts as a gateway between two different networks.
A public key encryption algorithm invented by Messrs Ronald Rivest, Adi Shamir and Leonard Adleman in 1978. Theoretically the key length can be unlimited, but typical sizes are 512, 768, 1024 and 2048 bits. 1024-bit keys are considered sufficient for both digital signatures and for key exchange, although 2048-bit keys are recommended for use where the data must be kept particularly secure for an extended time.
The security of RSA is based on the perceived difficulty in factoring large numbers. No easy method is known – although it cannot be mathematically proven that there is no easy method. If you take two very large prime numbers and multiply them together, you get a special number with only two factors. It is this unique relationship, together with the difficulty in factoring such numbers, that is used to provide the RSA system.
RSA was patented by RSA Security. The patent expired in September 2000.
«RSA is a public-key cryptosystem for both encryption and authentication; it was invented in 1977 by Ron Rivest, Adi Shamir, and Leonard Adleman. It works as follows: take two large primes, p and q, and find their product n = pq; n is called the modulus. Choose a number, e, less than n and relatively prime to (p-1)(q-1), which means that e and (p-1)(q-1) have no common factors except 1. Find another number d such that (ed – 1) is divisible by (p-1)(q-1). The values e and d are called the public and private exponents, respectively. The public key is the pair (n,e); the private key is (n,d). The factors p and q may be kept with the private key, or destroyed.
«It is difficult (presumably) to obtain the private key d from the public key (n,e). If one could factor n into p and q, however, then one could obtain the private key d. Thus the security of RSA is related to the assumption that factoring is difficult.»
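The arithmetic in the quoted description can be checked with deliberately tiny primes. This is a textbook toy example, orders of magnitude too small for real use, but every step matches the recipe above:

```python
# Toy RSA key generation with tiny primes (never use sizes like this in practice).
p, q = 61, 53
n = p * q                      # the modulus: n = pq = 3233
phi = (p - 1) * (q - 1)        # (p-1)(q-1) = 3120
e = 17                         # public exponent, no common factor with phi except 1
d = pow(e, -1, phi)            # private exponent: (e*d - 1) divisible by phi
assert (e * d - 1) % phi == 0

message = 65
ciphertext = pow(message, e, n)    # encrypt with the public key (n, e)
recovered = pow(ciphertext, d, n)  # decrypt with the private key (n, d)
assert recovered == message
```

With such a small modulus an attacker factors n = 3233 into 61 × 53 instantly and recomputes d; the security rests entirely on making that factoring step infeasible.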
RTF – Rich Text Format File
This is an alternative format to the DOC file (see DOC) which is supported by Microsoft Word. Files can be saved, with most of their formatting information intact, and then loaded back into Word as RTFs instead of DOCs.
RTF files are actually made up of ASCII text, with formatting commands embedded in them. For example, a word that appears in boldface would be marked in an RTF file using the control word “\b”. RTF files cannot contain macros, so they cannot be infected with a macro virus.
This provides a useful way of communicating with people outside your company. By sending documents in RTF, you effectively do away with the possibility that you might transmit a virus by mistake. Your recipients will be able to read your file directly into Word, and even to convert it back to a DOC file if they wish. But if it is then found to be infected, you will know that the infection was introduced after it reached them.
Note that the process of converting from DOC to RTF is imperfect. Some formatting features that are possible in Word do not survive the journey from DOC to RTF and back. Before committing to using RTF for sending and receiving email attachments, you may need to experiment with the conversion of common company documents. This will soon reveal any potential layout problems. You may need to simplify some formatting tricks that you are accustomed to using, but you will almost always find there is a simpler way to achieve the same result.
There is an important caveat here — you cannot assume that a file really is in RTF simply because its has an RTF extension. There are some macro viruses which intercept the attempt to save a file as RTF and force it to be saved as a DOC file, but with an RTF extension. If someone sends you such a file via email, and you double-click it, Word will attempt to load the file. Since Word recognizes it as a DOC file, despite its name, it loads it as a DOC file and activates the virus.
Fortunately, it is easy to check for yourself that an RTF really is what it claims. Try looking at a DOC file and an RTF file using NOTEPAD. The RTF file will load as legible ASCII text, starting with “{\rtf”; the DOC file will not.
Rubber hose cryptanalysis
A facetious term for obtaining cryptographic secrets, such as keys or passwords, not by breaking the algorithm but by coercing (or torturing) the person who holds them.
Rules are user- or administrator-configurable parameters that define how an application should operate under a set of predetermined conditions. Rules enable different users to tailor the operation of an application to their own specific requirements.
Rules are used to determine, for example, what should and should not be allowed through a firewall, what is and is not suspicious behavior for an IDS, what is and is not inappropriate content in a content security system, and so on. Too strict a set of rules becomes intrusive; too lax a set of rules becomes insecure.
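A first-match rule list of the kind a firewall evaluates can be sketched in a few lines. The ports, actions and the strict default below are invented for illustration:

```python
# An illustrative rule evaluator: rules are checked in order and the first
# match decides; nothing here models any real firewall product.
RULES = [
    {"port": 25,  "action": "deny"},    # block inbound SMTP
    {"port": 443, "action": "allow"},   # permit HTTPS
]
DEFAULT_ACTION = "deny"                 # strict default: block what is not listed

def evaluate(packet_port: int) -> str:
    for rule in RULES:
        if rule["port"] == packet_port:
            return rule["action"]
    return DEFAULT_ACTION

assert evaluate(443) == "allow"
assert evaluate(25) == "deny"
assert evaluate(8080) == "deny"   # too lax a default would make this "allow"
```

The choice of `DEFAULT_ACTION` is exactly the strict-versus-lax trade-off described above: "deny" is intrusive but safe, "allow" is convenient but insecure.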
An electronic mail protocol which supports encryption. See MIME.
Because of differences in approaches to the enforcement of privacy in computerized personal data, the USA ‹fails› the European Union’s definition of ‹adequacy›. As a result, it would normally not be possible for the European office of a US company to ‹export› personal data (such as customer records) to a server in the United States. See the Privacy Directive for further information.
In order to overcome these differences, the US Department of Commerce and the European Commission developed a ‹safe harbor› framework (which was approved by the EU in July 2000). US companies that certify to the safe harbor are assured of EU ‹adequacy› recognition, and are consequently safe from prosecution by European authorities under the European privacy laws.
The US Department of Commerce maintains a Safe Harbor website at http://www.export.gov/safeharbor/.
Salt is an arbitrary value, preferably a random number, that is added to a key or password, and is unique to each computer or cryptographic system. The effect is that the same key or password will generate different hash results on different systems, and it will make attacks against the password user-identification process more difficult.
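The effect is easy to demonstrate. The sketch below (a simplified illustration: real systems use a deliberately slow key-derivation function such as PBKDF2 rather than a single hash pass) shows the same password producing different stored digests under different salts:

```python
import hashlib
import secrets

# Salted hashing sketch: the same password under two different salts yields
# two different digests, so a table precomputed against one system is useless
# against another.
def hash_password(password: str, salt: bytes) -> str:
    return hashlib.sha256(salt + password.encode()).hexdigest()

salt_a = secrets.token_bytes(16)   # unique per system (or per user)
salt_b = secrets.token_bytes(16)

digest_a = hash_password("hunter2", salt_a)
digest_b = hash_password("hunter2", salt_b)
assert digest_a != digest_b        # identical passwords, distinct stored hashes
```

The salt itself need not be secret; it is stored alongside the digest, and its job is only to make every hash computation unique.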
Denis Howe defined ‹sandbox› (Free On-line Dictionary of Computing – FOLDOC, 1993) as a ‹Common term for the R&D department at many software and computer companies (where hackers in commercial environments are likely to be found). Half-derisive, but reflects the truth that research is a form of creative play.›
Today, ‹sandbox› is more likely to refer to the code technology that provides the basis of Java’s security. ‹Trusted› code is allowed full access to the system. ‹Untrusted› code is restricted to the sandbox, a protected and limited area of memory in which the code may ‹play› without causing any damage to the host system. In practice, this usually means that a Java application has full access to the system, while a Java applet (as downloaded from the Internet) is limited to the protected sandbox.
Security Administrator Tool for Analyzing Networks. Written by Dan Farmer and Wietse Venema and released on 5 April 1995. SATAN is, in many ways, the forerunner of today’s vulnerability scanner products. It probes systems looking for vulnerabilities. It works by telnetting to one port after another of the victim computer. It determines what program (daemon) is running on each port, and determines whether that daemon has a vulnerability that can be exploited. SATAN can be used by sysadmins to audit their own system security, or it may just as easily be used by a hacker to break into someone else’s computer.
Towards the end of 1996, Dan Farmer used SATAN to survey the security of 2200 of “the most interesting sites – banks and credit unions, some US federal computers, newspapers and some pure online Internet commerce systems.” Farmer found that “over 60% could be broken or destroyed (ie, all network functionality deleted or removed).” Furthermore, “no attempt was made to hide the survey, but only three sites out of more than 2000 contacted me to inquire what was going on when I performed the unauthorized survey – that’s a bit over one in one thousand questioning my activity.”
SATAN is not actively maintained.
Searching the remnants and residue of an object, such as a Swap File, in order to obtain unauthorized information.
SCR – Screen Saver Files
These are special types of program file (see EXE). They cannot be executed directly, but can be automatically launched by Windows after a specified time of inactivity.
Because most Windows machines have numerous SCR files installed, and because the system executes them for you without awaiting an explicit instruction to do so, there are viruses which infect them. Popular screen savers are often exchanged via email or downloaded from the Web, often without the same level of caution that would be afforded to an EXE or COM file.
from Sophos' V-Files
A derogatory term used to describe ‹wannabe hackers›. Given that the original meaning of ‹hacker› was not pejorative, but described a person with great systems knowledge and ingenuity, a script kiddie is someone who likes to break into other people's computers, but does not have the personal expertise of the genuine hacker.
Instead, the script kiddie relies on scripts and tools written by other people. For example, there are probably hundreds of script kiddies probing computer systems with tools like SATAN at any time.
Script kiddies are probably more of an infernal nuisance than a serious threat. But they are a serious nuisance:
«We are creating hordes and hordes of script kiddies. They are like cockroaches. There are so many script kiddies attacking our networks that it’s hard to find the real serious attackers because of all the chaotic noise.»
Marcus Ranum, speaking at the Black Hat Security Conference, 2000.
An apocryphal test for content security software. Scunthorpe is a town. It is a very nice town. But there is a danger that poorly designed content security software could block any mention of Scunthorpe because the name contains a rude four-letter word.
The Scunthorpe Test summarizes the difficulty facing content security designers and content security users. There are many, many other words that pose similar problems, either because of words within them, or because they have dual meanings where at least one is non-controversial. Blue Tits and chicken breasts come to mind…
The term used to describe the level of protection which should be afforded to information which, if disclosed to unauthorised persons, could cause potential damage or embarrassment, etc. Ideally, items classified secret should only be available to those who need to know.
A ‹secret›, probably a key or a password, is shared when it is split into a number of components, each of which is held by another person or in another place. Use of the key or password may require some or all of the components to be recombined before it can be used.
Secure Sockets Layer – SSL
A protocol originally developed by Netscape. It is a web encryption technology, mostly used by e-commerce web pages throughout the world, to allow safe transactions without the risk of data eavesdropping by third parties. It was developed to enable the secure transmission of documents over the Internet. SSL creates a ‹secure› connection between the client and host, enabling confidentiality of information transmitted.
Strictly speaking, SSL has now been superseded by TLS – but most people still use the term SSL. It is a public key encryption scheme that uses both public and private keys to authenticate users and provide secure communication between the browser and a website. Part of this process involves the server (the website) sending the client (the browser) its certificate. The certificate is provided by a CA, and guarantees the server’s authenticity so that the user can be confident about what resource the browser is talking to. After this key exchange takes place, a shared secret key is used to encrypt the data between browser and website. The existence of an SSL communication is indicated by a padlock icon in the bottom right hand corner of the browser screen.
A term used to describe the understanding of security requirements and methods. All companies should operate security awareness programmes to teach staff about the need for information security, the threats to information security and the methods for maintaining information security. Security awareness and security awareness training should be built into the security policy which should be part of the contract of employment.
A security classification is a hierarchical sensitivity label applied to an object. It is used to determine which users may access what data (generally based upon their own hierarchical security clearance level).
In order for the owner of ‹Information› to implement proper security controls, he, she or the organization must first classify that information with one of a number of classifications (see also label). The different classifications will have different levels of security controls (see also access control, mandatory access control, discretionary access control).
In the private sector, information is often classified into a number of hierarchical levels (although in reality each organization can set its own classifications, with its own level of security for each classification).
Sensitive data, for example, is information that requires a higher level of security than public data. The owner should protect it from loss of confidentiality as well as integrity from unauthorized individuals.
Within Government, the classifications are normally:
* Unclassified
* Sensitive but Unclassified
* Confidential
* Secret
* Top Secret
For all we know, however, the precise descriptions and number of classifications could be Top Secret…
Assuming that a system’s objects (let us say, ‹files›) are all given a hierarchical label defining their sensitivity (Security Classification), a subject’s (let us say, a user’s) security clearance is the corresponding label that defines the degree of sensitivity that can be accessed.
Clearance level labels could, and for administrative ease possibly should, be given the same names as classification level labels. Under such circumstances, a user with a clearance level up to ‹secret› would be able to access files with a classification level up to, but not higher than, ‹secret›.
The Security Clinic is a free service operated by ITsecurity.com. Visitors to the site can submit their security problems to a panel of more than 100 security experts from all round the world – and receive free and unbiased help, advice and opinions.
The Clinic is operated on an anonymous basis. The identity or e-mail address of those asking for help is never revealed.
Most problems and queries that are information security related are accepted; but use of the Clinic instead of an Internet Search Engine, for cheap market research, or to answer exam questions is deprecated.
In the UK, the Security Services are responsible for assessing the threat to the nation in respect of information warfare.
See also: Brian Gladman’s UK Government Information (In)Security Organisations
A specification of the security required of, and a functional specification for, a target of evaluation. The specification becomes the baseline against which the evaluation is conducted.
Security Through Obscurity
Security through obscurity (obfuscation) is generally a derogatory term used when vendors seek to hide security details. Examples include:
* proprietary encryption algorithms where the vendor declines to release details with the argument that publishing the details makes it easier for hackers to find a weakness
* vendors declining to publish, and arguing that nobody else should publish, details of any discovered weaknesses
The basic principle is that if no-one knows any of the details of the security, then equally no-one knows any weaknesses. The problem with this argument is that you can never be certain that crackers haven’t already found the weaknesses – better by far to be as open as possible so that good guys can find and help solve those problems.
A seed is a truly random bit sequence fed into a pseudo random number generator in order to generate a longer (and more ‹apparently random›) pseudo random sequence. The seed should be generated by a true random number generator.
Without the seed, a pseudo random number generator will produce predictable numbers.
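The determinism described here is easy to demonstrate. A minimal sketch in Python (the function name is illustrative):

```python
import os
import random

# A pseudo random number generator is deterministic: the same seed
# always produces the same sequence, so an unpredictable seed is essential.
def draw_sequence(seed, count=5):
    rng = random.Random(seed)                      # seed the PRNG
    return [rng.randint(0, 99) for _ in range(count)]

print(draw_sequence(42) == draw_sequence(42))      # True: same seed, same output

# In practice the seed should come from the operating system's
# entropy source rather than from anything predictable.
unpredictable_seed = int.from_bytes(os.urandom(8), "big")
```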
In the USA, government data is generally described as being classified or unclassified. ‹Sensitive› is a label within ‹unclassified›.
Government-held sensitive but unclassified information is information whose loss, misuse, unauthorized access to, or modification of, could adversely affect the national interest or the conduct of Federal programs, or adversely affect the privacy to which individuals are entitled under the Privacy Act.
A Report to the Secretary of Defense and the Director of Central Intelligence
February 28, 1994
Joint Security Commission
Washington, D.C. 20505
Separation of duties/roles/responsibilities
The practice of separating functions or roles among different individuals, in order to keep a single individual from subverting a process. For example, those responsible for auditing a computer system should not be the same as those responsible for programming the computer system.
See also: Symmetric Cryptography.
When using symmetric encryption it is considered bad practice always to use the same key for each communication with another party. By creating a very large pool of ciphertext under the same key, the cryptographer provides the attacker, or cryptanalyst, with a much easier task than cryptanalyzing small texts.
To alleviate this problem it is common practice to use a random Session Key to encrypt each message and then to encrypt this session key with the secret key shared by the communicating parties. The encrypted session key is then attached to the encrypted message and sent. In this way only a very small amount of ciphertext is provided to the attacker with every message.
This method also allows a message to be sent to multiple recipients without each recipient needing to share the same secret. Instead the message is encrypted with a session key and this session key is then separately encrypted with a differing shared secret for each recipient. All encrypted session keys are then attached to the message.
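The pattern above can be sketched as follows. XOR stands in for a real symmetric cipher purely to show the structure (it is not secure), and all names are illustrative:

```python
import os

def xor(data: bytes, key: bytes) -> bytes:
    # Stand-in for a real symmetric cipher; XOR with a repeating key.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def send(message: bytes, shared_secrets: dict) -> dict:
    session_key = os.urandom(16)               # fresh random key per message
    return {
        "body": xor(message, session_key),     # message under the session key
        # the session key wrapped separately under each recipient's secret
        "keys": {name: xor(session_key, secret)
                 for name, secret in shared_secrets.items()},
    }

def receive(envelope: dict, name: str, secret: bytes) -> bytes:
    session_key = xor(envelope["keys"][name], secret)  # unwrap the session key
    return xor(envelope["body"], session_key)          # then decrypt the body

secrets = {"alice": b"alice-shared-key", "bob": b"bob-shared-key!!"}
envelope = send(b"attack at dawn", secrets)
print(receive(envelope, "bob", secrets["bob"]))        # b'attack at dawn'
```

Note that only the small wrapped keys are duplicated per recipient; the message body is encrypted once.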
Secure Hash Algorithm, a NIST standard first issued in 1993. In 1995, NIST released an update called SHA-1 – which should clearly be used in preference to SHA. The update is believed to address a collision problem.
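For illustration, SHA-1 is available in Python’s standard library; note how a one-letter change in the input produces a completely different 160-bit digest:

```python
import hashlib

# SHA-1 yields a 160-bit digest (40 hex digits).
d1 = hashlib.sha1(b"The quick brown fox jumps over the lazy dog").hexdigest()
d2 = hashlib.sha1(b"The quick brown fox jumps over the lazy cog").hexdigest()

print(d1)          # 2fd4e1c67a2d28fced849ee1bb76e7391b93eb12
print(d1 == d2)    # False: one changed letter alters the whole digest
```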
On most Unix systems, users’ passwords are stored in a file which every user can read. It is relatively easy for a hacker to obtain a copy of this file and, by using one of a number of commonly available tools, to crack the encrypted user passwords stored within it.
More secure Unix versions use a system called Shadow Passwords. Here, the globally-readable password file contains various details about users, but not their actual passwords. These are held in a separate file which can be accessed only by a process running at root level. Such passwords can still be accessed (in encrypted form, of course) by hackers, but only by those who manage to gain root level access to the system.
Software distributed by the shareware method. To obtain the software you simply copy it from a friend or colleague, or download it from an online service. You then evaluate it (normally for a period of 30 days). If you then decide that you wish to use the software, you send a payment direct to the author. If you do not intend to use the program you simply delete it.
Many people refer to software as being either “shareware” or “commercial”. This gives the impression that software distributed by the shareware method is not commercial. However, it is, and using shareware beyond the evaluation period without sending the registration fee is no different from using a pirated copy of a conventional software package.
A utility program that enables the user to interact with the UNIX operating system. Commands entered by the user are passed by the shell to the operating system for execution. The results are then passed back by the shell and displayed on the user’s display.
There are several shells available.
A Unix-based account on a service provider’s computer.
A character at the start of the command line which indicates that the shell is ready to receive commands. The character is usually a ‹%› (percent sign) or a ‹$› (dollar sign), but can vary between systems.
Shoulder surfing is the practice of looking over a user’s shoulder to observe what he or she is typing. For example, a password may not be displayed on the screen, but it can be discovered by looking over the user’s shoulder and observing which keys are pressed.
Shredding is a term used to describe the secure erasure of computer files. It is not as easy as it might sound. On most systems, for example, the OS ‹delete› command simply removes the file name from the system’s internal directories. It appears as if the file has gone, but it can still easily be recovered.
Shredding, or secure erasure, usually involves physically overwriting the content of the file. But it still isn’t easy. Commenting in ITsecurity.com Security Clinic, security expert Robert Schifreen explained the difficulties:
“However, you need to be aware that, even if data has been overwritten thousands of times, it CAN still be recovered by someone who has access to the right kit… The military standard, for any hard disk that has held top secret data, is to physically crush the hard disk and destroy it.
“The reason why overwritten data can be recovered is because magnetic flux, which is how data is stored, decays slowly over time. If you can read not just the data on the drive, but also record the strength of the flux, you can sort new data from the remnants of the old stuff underneath. It’s not the sort of thing that a freeware tool can do, but it’s certainly not beyond the capability of law enforcement/intelligence agencies and specialist data recovery companies.”
A silent alarm is an alarm sent to an administrator or supervisor without giving any indication that an alarm has been raised. A silent alarm could be triggered by access control or intrusion detection systems.
Simple Mail Transfer Protocol (SMTP)
The standard Internet protocol for transferring e-mail between systems, defined in RFC 821. A sending system hands the message to the receiving mail server using a simple text-based command dialogue.
One of the three main properties of the Bell LaPadula security model (the others being the *-property (star property) and the tranquility property).
The simple property states that a subject may only have read access to an object if the security level of the subject dominates that of the object.
When you think about it, it’s quite ‹simple›: a user (a subject) may only read a file (an object) if he or she has a security level equal to or greater than that of the file. It means that someone with a ‹secret› security level cannot read a file with a ‹top secret› security level; but can read a file with a ‹secret› or ‹confidential› security level. Simple, really.
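The dominance check can be sketched in a few lines of code (the level names and numeric ordering are illustrative):

```python
# Simple security property: a subject may read an object only if the
# subject's clearance dominates (is at least) the object's classification.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

def may_read(clearance: str, classification: str) -> bool:
    return LEVELS[clearance] >= LEVELS[classification]

print(may_read("secret", "confidential"))  # True: secret dominates
print(may_read("secret", "top secret"))    # False: reading up is forbidden
```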
The ability to log in to multiple computers or servers with a single action and the entry of a single password.
Especially useful where, for example, a user on a LAN or WAN requires access to a number of different servers.
Although single sign-on makes the login process more convenient for the user, it does mean that the password becomes more valuable to a hacker because of the large number of systems it can access. For this reason some consultants discourage the use of single sign-on systems, and, where there is no other realistic option, recommend that passwords are guarded safely and changed regularly. Users must also be made fully aware of their responsibility for safeguarding their password.
Skipjack was originally an NSA classified 64-bit block cipher with a key size of 80 bits. At first it was only available as a hardware implementation within the Clipper chip. It was intended to be used in conjunction with a protocol called the Key Exchange Algorithm which would then allow law enforcement agencies to decrypt the data.
Skipjack was declassified in 1998 and can now be implemented in software without necessarily including KEA. Skipjack has not, however, proved to be very popular (probably because of its association with the NSA).
The algorithm itself would appear to be strong. However, it has been shown that, slightly modified, it is susceptible to a differential cryptanalysis attack. Since it has been suggested that the NSA has been aware of this attack since the beginning of DES, and certainly long before ‹differential cryptanalysis› was made public, some cryptanalysts believe that the NSA has always had the ability to crack Skipjack.
Sklyarov is a Russian programmer working in Russia for ElcomSoft, a ten-year-old company that develops software to recover computer passwords and provide computer security. ElcomSoft also helps identify, trace and shut down hackers, software pirates and cyber-thieves. Its software and consulting services are used in over 80 countries and by U.S. law enforcement agencies from the FBI to county sheriffs and local police departments.
Sklyarov noticed that Adobe was using trivial ROT-13, a code that has been around since ancient Rome, to encode its e-books. He produced a program that deciphers this and ElcomSoft sold it via the Internet. He did nothing illegal in Russia, and frankly nothing that should be illegal anywhere.
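ROT-13 simply rotates each letter thirteen places, so applying it twice restores the original; Python’s standard codec shows how trivial the scheme is:

```python
import codecs

# ROT-13: rotate each letter 13 places; encoding and decoding
# are the very same operation.
scrambled = codecs.encode("Attack at dawn", "rot13")
print(scrambled)                          # Nggnpx ng qnja
print(codecs.encode(scrambled, "rot13"))  # Attack at dawn
```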
In July 2001, Sklyarov traveled from Russia to the Land of the Free and delivered a talk on his findings at DefCon. He was then arrested by the FBI at the request of Adobe for contravening the DMCA.
“His software allows you to read your own copy of an e-book using a different program, computer, or operating system than the one you’ve registered it for. Sklyarov’s software is popular with blind people, who use it to feed e-books into speech synthesizers, and with readers who are afraid that their e-books will become unreadable after a computer upgrade or operating system change–a reasonable concern. Sklyarov remains in jail today, even though Adobe Systems Incorporated, which instigated the arrest, later regretted its own action and called for his release. In a New York Times editorial, Stanford law professor Lawrence Lessig asserts that Sklyarov hasn’t broken any law. It’s ironic that a Russian had to come to the U.S. to be arrested for what are essentially thought-crimes: allowing people access to books, and exercising his free-speech right by blowing the whistle on inferior products,” wrote Bruce Perens in ZDNet, 2 August 2001.
The space on a hard disk between the end of a file and the end of the cluster that the file occupies. For example, on a drive with a 16 kb cluster size, there will be 5 kb of slack space at the end of an 11 kb file. The slack space is wasted, as it cannot be used by another file – the only way to prevent so much space being wasted is to reduce the cluster size, either by reducing the size of the disk partition or by using an operating system that doesn’t suffer from the problem, such as Windows NT or OS/2.
When a file is copied, its slack space is not copied. However, someone with the correct software tools can examine slack space so it is important that, if there’s a risk of slack space containing remnants of confidential files, you wipe it with a suitable software utility.
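The arithmetic above can be sketched as follows (a minimal illustration, working in kilobyte units):

```python
import math

# Allocated space is always a whole number of clusters; slack is the
# difference between the allocation and the file's actual size.
def slack_kb(file_size_kb: int, cluster_kb: int) -> int:
    clusters = math.ceil(file_size_kb / cluster_kb)
    return clusters * cluster_kb - file_size_kb

print(slack_kb(11, 16))  # 5: the example above, one 16 kb cluster
print(slack_kb(33, 16))  # 15: three clusters (48 kb) hold a 33 kb file
```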
A plastic card normally the same size and shape as a traditional credit card, but including a microprocessor and a user interface. Information contained on the card or generated by the microprocessor can be used for different purposes. A smart card reader is required to transfer the data from the card to a computer.
Smart cards are relatively secure. However, it should be noted that they are not secure against laboratory attacks.
Smart cards are defined by the ISO 7816 standards.
Smart Card Reader
A device used to read data contained in or generated by a smart card, and to pass that data to a computer system. It acts as a bridge between the card and computer.
There is no technical reason why ‹readers› cannot also be ‹writers›, with the computer updating or amending data held on the card.
A common technique for performing a DoS (Denial of Service) attack — but one that takes a different approach to the usual DoS routine. A DoS attack would normally use a flood of pings to overwhelm a given host. But to really overwhelm the resources of a given target, the attacker would have to compromise many computers, and have them ping the target simultaneously.
The difference that Smurf provides is the ability to launch a DoS against a target from only a single computer. Smurf works by sending a ping packet to the broadcast address of a network, using a spoofed source IP address. The source IP address is the actual target. When all the clients of the network respond to the ping packet, they will all be sending a reply to the single spoofed address. It is this multiple response that is the actual DoS attack, overwhelming the spoofed target system.
A term used to describe a product analogous to the type of ‹cure-all› snake oil products that used to be sold by traveling salesmen: all hype and no substance. It is used with particular relevance to crypto products that do not live up to the salesman’s claims.
However, it is difficult to specify a precise definition of what is and what isn’t ‹snake oil›. One attempt defines crypto snake oil as a set of algorithms that are unpublished and untested. The problem here is that being unpublished does not necessarily make it no good. Just probably no good (until somebody can be bothered to reverse engineer it, cryptanalyse it, and publish the results).
Another attempted definition describes crypto snake oil as encryption that doesn’t work. The main problem here is that sooner or later almost all encryption products will be broken. This implies that today’s security becomes tomorrow’s snake oil – which is blatantly unfair to some very good products.
It is probably best to treat ‹snake oil› as an emotive term that can be used to describe an untrustworthy product that you have good reason to believe is untrustworthy. But be ready to justify your claim.
A program that monitors network traffic. Sniffers are used to capture data transmitted on a network. The act is called sniffing.
Like so many security applications, sniffers can be used to either enhance or weaken network security. Intrusion Detection Systems use sniffers to detect suspicious traffic; hackers use sniffers to obtain passwords.
Short for Simple Network Management Protocol, SNMP is a set of protocols for managing network equipment. It replaced the earlier Simple Gateway Monitoring Protocol (SGMP) in 1990, and is defined in RFC 1157. It is present in almost all networks and is the means by which the different elements communicate.
RFC 1157 defines the basic architecture thus: “Network management stations execute management applications which monitor and control network elements. Network elements are devices such as hosts, gateways, terminal servers, and the like, which have management agents responsible for performing the network management functions requested by the network management stations. The Simple Network Management Protocol (SNMP) is used to communicate management information between the network management stations and the agents in the network elements.”
In early 2002, however, Oulu University in Finland (more specifically, the Oulu University Secure Programming Group) announced that its program of testing application and system software had shown that “Many of the implementations available for evaluation failed to perform in a robust manner under test.” In short, vast areas of the Internet and the majority of corporate WANs and LANs contained vulnerabilities through the omnipresence of SNMP. It demonstrated very clearly that the very advantages of SNMP — simplicity, venerability and ubiquity — are no proof of security.
«Snort is a proper noun like the name of a person; it’s not SNORT or S.N.O.R.T. or anything like that. I came up with Snort because it was a sniffer only at the time, but with some more features that were useful to me, so I thought ‹Snort is like sniffing, but more›.»
Martin Roesch, author of Snort.
Snort also functions as a network sniffer and logger, using crafted rules which are matched against the packets as they are captured. If a rule matches, the user-defined action in the rule is executed.
Snort is designed to provide reliable performance, simplicity of use, and flexibility in its environment. Internally, there are three main subsystems that comprise Snort:
* Packet Decoder
* Detection Engine
* Logging and Alerting System
It can be deployed on almost any host in a Windows or Unix network. Usually Snort listens on an interface in promiscuous mode, with one sensor per collision domain.
An attack that targets vulnerabilities in the human component of a system. A system is composed of hardware, software and human components.
A cracker can exploit any of these components to access the system successfully. Usually social engineering techniques are faster, easier to implement, and require less detailed system knowledge; and are therefore tried before attempting to exploit hardware and software vulnerabilities. An example of social engineering is when a cracker tries to con a password out of an authorized user while pretending to be part of the system’s maintenance crew.
Kevin Mitnick is (in)famous for his social engineering skills.
SORM, in Russia, is the technical means of supporting investigative activities in documentary telecommunications networks (NDTC), established on the basis of Russian Federation legislation. It provides technical support for such investigative measures in the telecommunications networks used to supply customers with telematic services, data transmission services, and access to the global Internet.
It is, in effect, Russian legislation that gives the FSB (Federal Security Service, successor to the KGB) the ability to conduct Internet surveillance on all citizens without first gaining a court order. In Russian, the acronym stands for “System for Conduct of Investigations and Field Operations”. It is remarkably similar in name and intent to the UK’s Regulation of Investigatory Powers Act (RIPA) — and it is a salutary lesson in prejudice that while the former is frequently described as confirming the country as a ‹police state›, the latter carries no such epithet. It should.
Spam is the term given to electronic junk mail — indiscriminate unsolicited commercial e-mail. It is a growing problem, and no e-mail user is immune. It is generally accepted that the name originated from Monty Python, but it is not clear whether it comes from the menu sketch that offered spam with everything, or the Viking chorus that sang “spam, spam, spam” until it obliterated all other conversation. Both derivations fit. Incidentally, electronic spam should be written in lower case to avoid confusion with the SPAM™ of Hormel Foods Corporation.
But despite the humorous etymology, spam is a serious problem. It can lead to denial of service. It can cost thousands of dollars in wasted employee time. It can lead to the unwitting display of pornographic material on users’ screens. And it causes an awful lot of heat under the collar.
One of the problems is that there is a tendency to omit the term ‹indiscriminate› from the definition of spam. As a result, many users now consider the receipt of all unsolicited commercial e-mail to be spam. Commercial e-mail is a fact of life, and we should get used to it – just as we get used to physical junk mail. What we don’t like we bin without opening it.
Overreaction just causes problems all round. Genuine spammers, those who fire off indiscriminate junk e-mail so that you receive half a dozen copies of the same mail in a single day, or different mails from the same source every day, almost invariably cover their tracks. They hijack other servers; they forge mail headers; they piggyback on vulnerable software (as has happened with FormMail); and they use ISPs in less well regulated countries. Thus, complaints to abuse@TheSendersISP.com will hardly ever solve the problem of genuine spam.
There are organizations that compile lists of known spammers (sometimes called black holes) that are made available to ISPs who can then decide to block them, or insert a spam-warning in the e-mail header. The problem with this approach is that there is a serious risk that genuine commercial communication could be erroneously blocked. This is effectively censorship by a third party who is involved with neither the sender nor the receiver and almost certainly hasn’t read the message itself. Nevertheless, there is something to be said for it. In one case, Virgin.Net was black-holed because one of its users sent out 250,000 junk e-mails – which clearly qualifies as spam. You could say that Virgin.Net was unjustly treated – or you could say that it should have had procedures in place to prevent such an occurrence. The real victims, given that the ISP had an estimated 250,000 customers, were the other 249,999 innocent users who suddenly found their e-mails being bounced by other ISPs.
Spam is a problem. There is no easy solution. Overreaction doesn’t help. Over-restriction doesn’t help. Each individual needs to seek a solution that works for him or her.
See also: Open relay
Faking the origin; for example, forging mail headers to make it appear that messages originated elsewhere. One spoof incident reported by CERT involved messages sent to users, supposedly from local system administrators, requesting them to change their password to the new value provided in the message. These messages were not from the administrators, but from intruders trying to steal accounts.
Web spoofing. Academics at Princeton University published a paper describing how easy it is for Web spoofers to produce a ‹fake› site that can sit between the user and his or her intended destination. The spoofers could receive messages and then pass them on to the true destination, and could receive replies and pass them back to the user. In this way it would be possible to ‹filter› valuable information, possibly without the parties concerned ever knowing that it had occurred.
Spyware is any product that employs a user’s Internet connection in the background without his or her knowledge, and gathers/transmits information on that user or his or her behaviour. It is a rather loose term for a class of software that is generally unrequested, hidden and unknown on the user’s PC. Its purpose is to quietly mail home with information about the user, and is usually associated with adware.
It is insidious in that it is often built into genuine and useful shareware programs – and there is often ‹small print› in the agreement, which nobody reads, that confirms its acceptance by the user. In this way, spyware usually defeats personal firewalls, since it is specifically allowed by the user.
It often defeats Anti-virus software on legal grounds. Since the spyware is either wholly or part of a ‹commercial› product, any attempt by AV software to remove it could be considered restraint of trade. AV companies tread very carefully here.
In general, spyware is used for commercial purposes – to help the vendor build a profile of the likes and dislikes, habits and preferences of the user. At these times, spyware is more of a threat to privacy than it is to security.
However, there is another category of spyware that is altogether more worrying. This has nothing to do with adware, but is more concerned with snooping. Such systems can, for example, make a record of all the user’s keystrokes (which would include, for example, web sites visited, passwords entered, and credit card numbers used) and surreptitiously mail them to the spyware ‹owner›. The developers of this type of spyware often claim legitimacy by pointing out that it could be used by worried parents wanting to keep a tab on their children’s use of the Internet.
SSID (Service Set IDentifier)
The SSID is a secret key, set by the network administrator. It identifies an 802.11 network. You need to know the SSID in order to connect to the network – and it is consequently a prime target for hackers and crackers. It has several weaknesses: vendors often provide a default that is frequently left unchanged by network administrators; it can be discovered by sniffing; and it is an administrative problem since the act of ‹locking out› one user requires that the SSID be changed for all other users.
Stacheldraht is a classic DDoS attack, using a master program and multiple agents on multiple compromised systems. In many ways it is similar to TFN, but includes encrypted communication between the attacker and the master program, and can update the agent programs using rcp (remote copy).
The world wide web is a ‹stateless› environment. This simply means that web servers (Internet websites) do not know which user’s browser is making a request for information. Such information was not necessary for the original concept of the world wide web as a means of moving data freely between different users.
However, the advent of electronic commerce over the web has made this statelessness a problem. If the user ‹buys› a product from one page, and then moves on to another page, the web server has no way of knowing which user is accessing the second page. Because of the stateless nature of the web, the purchase from the first page will be ‹forgotten›.
The solution has been the advent of cookies. Used honorably, cookies can be a good thing. Unfortunately, they are open to abuse – and they can also be a bad thing.
The ability of a virus to evade detection by various means: for example, by altering the dir command so that it shows the original length of an infected file, rather than the increased length that results from the virus code hidden at the end.
The science of hiding the existence of a message, as opposed to hiding the meaning of the message (which is cryptography). Cryptanalysis, cryptography and steganography are the fundamental branches of cryptology.
Steganography will often hide a secret message within an innocent message, so that even the existence of the secret message is hidden. Historical methods could include minute variations on individual characters, tiny pin pricks on those characters, a cut-out overlay that reveals the message alone, and so on.
Today’s most popular methodology is to hide the message within a graphic image. A 64kb message could be hidden within a 1024×1024 greyscale image without affecting the visual appearance of the image.
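The arithmetic behind that claim is straightforward: 1024×1024 pixels give 1,048,576 least-significant bits, i.e. 128 KB of capacity, so a 64 KB message fits with room to spare. The least-significant-bit (LSB) technique can be sketched in a few lines of Python; here a plain list of 8-bit greyscale pixel values stands in for the image, and the function names are illustrative rather than taken from any particular tool. Each pixel changes by at most one grey level, which is invisible to the eye:

```python
def hide(pixels, message):
    """Hide message bytes in the least-significant bits of pixel values."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for message")
    stego = list(pixels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & ~1) | bit  # each pixel changes by at most 1
    return stego

def reveal(pixels, length):
    """Recover `length` bytes from the pixels' least-significant bits."""
    message = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        message.append(byte)
    return bytes(message)
```

A real tool would spread the bits pseudo-randomly across the image rather than using the first pixels, to make statistical detection harder.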
A cipher that operates on a continuous stream of data by ciphering individual elements, such as bits or bytes, without the need to accumulate blocks of data.
Compare to block cipher.
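A minimal stream cipher sketch in Python: a keystream generator (here a toy linear congruential generator, nothing like the LFSRs of A5/1 and offering no security) emits one byte at a time, and each data byte is XORed with the next keystream byte. Note that encryption and decryption are the same operation:

```python
def keystream(key, n):
    """Toy keystream generator: a linear congruential generator seeded
    by the key. Illustrative only, NOT cryptographically secure."""
    state = key
    for _ in range(n):
        state = (1103515245 * state + 12345) % 2**31
        yield state & 0xFF

def stream_cipher(key, data):
    """Encrypt or decrypt: XOR each byte with the next keystream byte."""
    return bytes(b ^ k for b, k in zip(data, keystream(key, len(data))))
```

Because no block of data needs to be accumulated, each byte can be ciphered the moment it arrives, which is why stream ciphers suit continuous traffic such as voice.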
A term given to a cryptosystem that uses a key of sufficient length, with an algorithm that has not been broken, such that in all probability the key cannot be discovered within a meaningful time frame.
The mathematically predictable timeframe for brute forcing the key (ie, the time it would take to test all the possible keys using all the available computing power) can be used to determine whether a particular cryptosystem is strong or weak.
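That predictable timeframe is simple arithmetic: the keyspace contains 2^keybits keys, divided by the rate at which keys can be tested. A small illustrative Python helper (the rate of one billion keys per second below is a hypothetical figure, not a measurement):

```python
def brute_force_years(key_bits, keys_per_second):
    """Worst-case time to try every key in a key_bits keyspace, in years."""
    seconds = 2 ** key_bits / keys_per_second
    return seconds / (365.25 * 24 * 3600)

# At a hypothetical rate of one billion keys per second, 56-bit DES
# falls within a few years, while 128 bits remains out of reach.
des_years = brute_force_years(56, 1e9)
aes_years = brute_force_years(128, 1e9)
```

The sharp jump between the two figures shows why each added key bit doubles the attacker's work, and why key length alone can move a system from weak to strong.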
* be bound to any program, thus having any size;
* be given any icon;
* be launched in many different ways on system startup;
* use any port for listening (actually the random port it chose during its installation — meaning that security scanners can no longer scan for the Sub7 port, since there isn’t one);
* monitor every keystroke for the purpose of capturing online passwords, credit card numbers, e-banking passwords and so on;
* delete, create and edit files.
It can also make the victim’s machine act as a zombie used in a DDoS attack to bring down some servers (web, DNS, etc). Since it has been around for a long time now, most current firewalls and antivirus software can detect it.
In an automated information system (AIS), a subject is any active entity that causes information to flow among passive entities called objects. The subject could thus be a user or a process.
A person who has been granted administrator privileges, i.e. unrestricted access to the whole system, command line and files, regardless of any permissions granted. The term SuperUser is most commonly associated with the UNIX environment, where it denotes root access.
Roger Clarke defines surveillance thus: “Surveillance is the systematic investigation or monitoring of the actions or communications of one or more persons. The primary purpose of surveillance is generally to collect information about the individuals concerned, their activities, or their associates. There may be a secondary intention to deter a whole population from undertaking some kinds of activity.” (http://www.anu.edu.au/people/Roger.Clarke/DV/Intro.html)
The traditional view is that surveillance is usually undertaken by CCTV (or microphones/bugs), which can be either company-owned or State-owned. The UK is already considered to be one of, if not the, most ‹watched› nations in the world. This has little direct relevance to information security, except where it is linked to facial recognition systems and used, perhaps, as an additional form of access control within general facilities management. However, it has much to do with privacy. Many people also consider that ‹mass surveillance› linked to large scale state databases carries with it very sinister overtones.
One of the arguments put forward for the use of visible surveillance is that it conforms to the principles of CPTED (Crime Prevention Through Environmental Design). Put simply, shoplifters are less likely to operate in stores that operate visible CCTV, and are more likely to get caught if they do. The same principles apply as much to the cyberworld as to the physical world: social engineering and malicious insider activity are more difficult in an environment known to be under constant surveillance.
In the cyberworld, of course, surveillance isn’t limited to CCTV and microphones, but can be achieved by sniffers, interceptions, content security, event logs and so on. Here the meaning is very similar to ‹monitoring›. Perhaps the real difference is one of targeting and intent. If the intent is to target and monitor a specific person or related group of persons, then it is surveillance. If the intent is to monitor something more general, say, performance, and to report on events and conditions that affect that performance, then it is monitoring.
Symmetric / Secret Key Cryptography
Symmetric cryptography uses the same key to encrypt and decrypt text. This key must be kept secret from everyone other than those authorized to decrypt the text – and is hence also called ‹secret key cryptography›. Compare this with asymmetric/public key cryptography.
Cryptography is the method by which the meaning of a message can be obscured from any eavesdropper, normally through a combination of several stages of substitution and transposition of the characters and symbols within a message. This is normally achieved by entering two variables known as the Plaintext (the original message) and a Key (some random number) into an encryption algorithm which then outputs the Ciphertext (scrambled message).
A SYN Flood is a form of Denial of Service (DoS) attack which attempts to prevent communications by depleting the available resources of a target host.
To open a TCP session, the initiator sends a packet marked as a SYN packet to the target host. The host then returns an acknowledgment, or ACK, packet, which also acts as a SYN packet for the return communications path. Finally, the initiator must ACK the target’s SYN before proper communications can continue. This process is known as the ‹three-way handshake›. Each host will use at least two ‹sockets› for each TCP session, so this process ties up two sockets on each host until the session is terminated. Every host has a limit to the number of open sockets it can support, either because the limit is built into the system or because it will run out of memory to keep track of the open sockets.
In a SYN flood attack, the attacker uses a special program that sends many SYN packets in quick succession to the target, each one depleting the stock of available sockets. The SYN flood program does not intend to use the session, and will therefore not keep a socket open at its own end or reduce its own stock. Eventually the target runs out of sockets, and no more communication can take place with any other host.
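The resource-depletion mechanism can be modelled without sending a single packet. In this toy Python model, a host holds a fixed pool of half-open-connection slots; a flooder that never completes the handshake exhausts the pool, after which a legitimate SYN is refused. The class, pool size and addresses are all illustrative:

```python
class TargetHost:
    """Toy model of a host with a finite pool of half-open-connection slots."""
    def __init__(self, max_half_open):
        self.max_half_open = max_half_open
        self.half_open = set()

    def receive_syn(self, source):
        """Handle an incoming SYN: reserve a slot and 'send' a SYN-ACK."""
        if len(self.half_open) >= self.max_half_open:
            return False  # pool exhausted: the SYN is dropped
        self.half_open.add(source)
        return True       # slot stays tied up until ACKed or timed out

target = TargetHost(max_half_open=128)

# The flooder sends SYNs from spoofed sources and never completes the handshake.
flood = [target.receive_syn(("10.0.0.1", port)) for port in range(200)]

# A legitimate client now tries to connect, and is refused.
legit = target.receive_syn(("192.168.1.5", 40000))
```

The hardening measures described below correspond to adding a timeout that evicts stale entries from `half_open`, or refusing further SYNs from a suspect source.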
Hardened systems are protected from SYN Flooding either because they will not allow an unacknowledged session to live beyond a certain time or by refusing all further communications from a host which appears to be attempting a SYN Flood.
To tamper with a system is to make an unauthorized modification to the working of that system in order to subvert its security services. The system in question here could be a hardware device, the source code of an important program, financial data, or some private information like private key, password file.
Target of Evaluation (TOE)
An IT system, part of a system or product that has been identified as requiring security evaluation; ie, that which is being evaluated.
TCP sequence prediction attack
Within TCP is a sequence number field. The sequence number ensures that received packets are delivered to higher levels of the protocol stack in the correct order. If a communication is in process, then you can assume that authentication between the two systems has already been established. In theory, you can then hijack the communication and send your own packets to the destination system without any need for further authentication. To succeed, you need to be able to predict and include the correct sequence number field, and to prevent the genuine packets arriving before yours (which could be done by taking down the sending system with a DoS attack).
TCPA is the Trusted Computing Platform Alliance, formed by Compaq, HP, IBM, Intel and Microsoft. Its mission statement is “Through the collaboration of HW, SW, communications, and technology vendors, (to) drive and implement TCPA specifications for an enhanced HW and OS based trusted computing platform that implements trust into client, server, networking, and communication platforms.”
See also: Palladium.
Trusted Computer System Evaluation Criteria: a document published by the NCSC (5200.28-STD) in 1985, containing the evaluation criteria for assessing degrees of assurance in the security features of hardware and software systems. The document is frequently referred to as ‹The Orange Book›.
It defines security in classes ranging from D (minimum) to A1 (highly secure). These classes define the security functions required to meet a specified level of trust, and while most mainframe security systems will meet the class B criteria, it is more common to see commercial products being submitted at class C2 functionality. An example of a product classified to C2 level is Windows NT.
Telecommunications Act of 1996
An Act that became US law on February 8, 1996. Its primary purpose is to stimulate competition within the telecommunications industry, but it also includes provisions that make it a criminal offence to carry pornography over the Internet in a manner that is easily accessible to children.
A terminal emulation protocol that allows remote log in across a TCP/IP-based network, which can be a LAN, a WAN or the Internet. The user must have a valid account name and password on the remote computer in order to gain access, unlike FTP, which allows access to all incoming callers.
Transient Electromagnetic Pulse Surveillance Technology: the study and control of the spurious electronic signals emitted by electrical equipment.
Computer monitors emit radiation. With the correct equipment, this radiation can be detected and interpreted from more than 1 km distance. (In the UK, similar technology is used to determine whether a household with no record of a TV licence is currently viewing a television. This can be done from the detector van in the street, and can even determine which programme is being viewed.)
TEMPEST is also a US Government program for the evaluation and endorsement of electronic equipment that is safe from this type of ‹attack›. TEMPEST certification confirms that particular equipment has passed government specifications for limiting radiation to levels that will not compromise the information being processed.
The Ministry of Defence (MOD)
In the UK, a major MOD responsibility is that of collecting and analysing military intelligence data. The staff involved are highly professional and very careful to ensure that their work does not stray over the boundary into activities not soundly based within the statutory responsibilities of the MOD. I consider them a national asset and not a threat to the privacy of UK citizens. MOD has its own collection assets but also relies heavily on GCHQ.
The MOD is a major client for GCHQ intelligence data and a major user of secure communications and information systems. As such it is a major client of both GCHQ and CESG. In respect of cryptographic products MOD has been CESG’s major customer and has in the past taken as much as 90% of their output.
MOD relies on CESG for the design of cryptographic algorithms and prototype designs but does most of its own development and production work through its Procurement Executive in Bristol. Except for cryptographic algorithms MOD has an independent mandate to undertake its own programme of research and development in respect of communications and information systems security.
In principle MOD does not have to apply CESG rules, or take their advice, but in practice it almost always does, even when it is aware that it is flawed. This is engineered through a careful ‹conspiracy› between CESG and GCHQ – if MOD does not accept what CESG tells them to do GCHQ then threatens to cut off MOD’s intelligence data feed on the pretext that MOD computer systems are not secure enough to handle it.
The only area of MOD to avoid this ‹blackmail› is the MOD Procurement Executive in Bristol which, because it does not need GCHQ intelligence, has been able to implement reasonably effective and reasonably secure computer systems to support its operations.
MOD staff at all levels are well aware that GCHQ advice (and that is what CESG advice is) is wasting large sums of taxpayers money but they don’t do anything about it for fear of upsetting GCHQ.
An action or event that might prejudice security.
An event, process, activity (act), substance, or quality of being, perpetrated by one or more threat agents, which, when realized, has an adverse effect on organization assets, resulting in losses attributed to:
* Direct loss
* Related direct loss
* Delays or denials
* Disclosure of sensitive information
* Modification of programs or data bases
* Intangible losses, e.g. goodwill, reputation, etc.
Threats can be active (where there is a deliberate attempt to alter the authorized state of the data or system), or passive (where the threat arises from negligence, poor coding or misconfiguration). Compare this difference to active and passive attacks.
Any person or thing, which acts, or has the power to act, to cause, carry, transmit, or support a threat.
A description of the assumptions about what attacks could be attempted against a particular cryptosystem.
A data encryption algorithm used by (UK) government departments and not available to commercial or private users.
‹Tiger Team› appears to have its origins within the US military.
Originally, it was a team of military security experts whose purpose was to penetrate security installations, physical and information, and thus test the security measures. The teams would leave cardboard signs saying ‹bomb› in critical defense installations, hand-lettered notes saying ‹Your codebooks have been stolen› (they usually weren’t) inside safes, etc. After a successful penetration, some high-ranking security type would show up the next morning for a ‹security review› and find the penetration evidence — and all hell broke loose. Serious successes of tiger teams have apparently led to early retirement for base commanders and security officers.
There is a classic story of a tiger team’s inability to break into a particular IBM system. In full military regalia they went instead straight to IBM and secretly appropriated some IBM stationery. They then developed a fake patch and issued it using official IBM stationery at about the time for a genuine patch. The installation manager installed it – and voila!, the tiger team had gained its back door into the computer system it was testing. The story also demonstrates the importance of social engineering in hacking techniques.
More recently the term has been adopted in commercial computer-security circles to describe teams of professional hackers paid to test computer security. In this sense, the term seems to be being replaced by ‹ethical hackers›.
The highest of the main standard security classifications.
UK Government: The compromise of Top Secret information is likely to:
* threaten directly the internal stability of the UK or friendly countries
* lead directly to widespread loss of life
* cause exceptionally grave damage to the effectiveness or security of UK or allied forces or to the continuing effectiveness of extremely valuable security or intelligence operations
* cause exceptionally grave damage to relations with friendly governments
* or cause severe long-term damage to the UK economy.
US Government: “loss of Top Secret data causes exceptionally grave damage to the national security”, Computer Security Requirements (CSC-STD-003-85).
One of the three main properties of the Bell LaPadula security model (the others being the *-property (star property) and the simple property).
The tranquility property states that the security level of an object cannot be changed while it is being processed by a computer system.
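A sketch of how tranquility interacts with the other Bell LaPadula rules, in Python: the object's level is exposed as a read-only property, so it cannot be relabelled while the system runs, and the simple property ('no read up') and *-property ('no write down') are checked against that fixed level. The level names and lattice below are illustrative, not from any particular standard:

```python
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

class TranquilObject:
    """An object whose classification is fixed while the system runs
    (tranquility): the level is read-only after creation."""
    def __init__(self, name, level):
        self.name = name
        self._level = LEVELS[level]

    @property
    def level(self):
        return self._level  # no setter: relabelling raises AttributeError

def can_read(subject_level, obj):
    """Simple security property: no read up."""
    return LEVELS[subject_level] >= obj.level

def can_write(subject_level, obj):
    """*-property: no write down."""
    return LEVELS[subject_level] <= obj.level
```

Without tranquility, a subject could relabel an object downwards mid-session and quietly defeat both checks, which is exactly what the property forbids.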
*-property (star property)
Transmission Control Protocol/Internet Protocol (TCP/IP)
A compilation of network and transport level protocols that allow a PC to speak the same language as other PCs on the Internet or other networks.
Transport Layer Security – TLS
Transport Layer Security (TLS) is an Internet security protocol based on, and very similar to, SSL version 3. It provides end-to-end encryption.
Tribal Flood Network (TFN)
Tribal Flood Network, like trinoo, is a classic DDoS attack, using a master program and multiple agents on multiple compromised systems. Unlike trinoo it can spoof the source IP for the agents, and can generate multiple types of attack (including UDP flood, TCP SYN flood, ICMP echo request flood, and ICMP directed broadcast). TFN2K is a more sophisticated version of the original TFN.
trinoo is a classic DDoS attack, using a master program and multiple agents on multiple compromised systems. The attacker activates the master program and the master activates the agents using a list of IP addresses. The attacker connects to the master program via TCP, typically telnet. The master program launches the agents, connecting to them via UDP on port 27444. The agents then simultaneously attack one or more targets by flooding the network with UDP packets.
DES, a long-standing US encryption standard, is no longer considered secure. The algorithm hasn’t been cracked – but increasing computing power means that keys can be found by brute force within a matter of hours. The process to find a successor, AES, is under way. In the meantime, the recommendation is to use Triple-DES (3DES). This uses the same DES algorithm but increases the keyspace sufficiently to make a brute force attack infeasible.
“Triple-DES is the repeated application of three DES encryptions, using two or three different keys. This algorithm leverages all the security of DES while effectively lengthening the key.” Bruce Schneier.
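The EDE (encrypt-decrypt-encrypt) composition Schneier describes can be sketched with a stand-in block cipher. Real DES is far too large to reproduce here, so a toy byte-wise cipher takes its place purely to show the structure; it offers no security. Note the backward-compatibility property: with all three keys equal, EDE collapses to a single encryption:

```python
def _rotl(b, n):
    """Rotate an 8-bit value left by n bits."""
    return ((b << n) | (b >> (8 - n))) & 0xFF

def toy_encrypt(key, block):
    """Stand-in for single-key DES encryption: XOR then rotate each byte.
    Purely illustrative."""
    return bytes(_rotl(b ^ key, 3) for b in block)

def toy_decrypt(key, block):
    """Inverse of toy_encrypt."""
    return bytes(_rotl(b, 5) ^ key for b in block)

def triple_ede_encrypt(k1, k2, k3, block):
    """3DES structure: C = E_k3(D_k2(E_k1(P))). With k1 == k2 == k3 this
    collapses to a single encryption, giving compatibility with plain DES."""
    return toy_encrypt(k3, toy_decrypt(k2, toy_encrypt(k1, block)))

def triple_ede_decrypt(k1, k2, k3, block):
    """Inverse: P = D_k1(E_k2(D_k3(C)))."""
    return toy_decrypt(k1, toy_encrypt(k2, toy_decrypt(k3, block)))
```

The middle step is a decryption rather than a second encryption precisely so that the single-key case degrades gracefully to DES.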
A file integrity security program written by Gene Kim and Gene Spafford. It checks designated files and/or directories against a previously generated database, and flags any files that have been added, deleted or changed.
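The core of such a file-integrity checker is small: hash every monitored file into a baseline database, then re-hash later and diff. A minimal Python sketch (using SHA-256 here; Tripwire itself supports several hash functions, and these function names are illustrative):

```python
import hashlib
import os

def snapshot(paths):
    """Build a baseline database: path -> SHA-256 digest of file contents."""
    digests = {}
    for p in paths:
        with open(p, "rb") as f:
            digests[p] = hashlib.sha256(f.read()).hexdigest()
    return digests

def compare(baseline, paths):
    """Report files added, deleted or changed since the baseline was taken."""
    current = snapshot([p for p in paths if os.path.exists(p)])
    added = [p for p in current if p not in baseline]
    deleted = [p for p in baseline if p not in current]
    changed = [p for p in current
               if p in baseline and current[p] != baseline[p]]
    return added, deleted, changed
```

In practice the baseline database must itself be protected (e.g. stored read-only or offline), since an intruder who can rewrite it can hide any change.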
A computer program that has a useful function, but which also contains additional (hidden) functions. The hidden functions may secretly exploit the legitimate authorizations of the invoking process to the detriment of the system’s security; for example, by making a blind copy (BCC) of a sensitive file for the creator of the Trojan Horse. Trojan Horses are often designed to deliver a tool (perhaps a RAT) that may then be used to subvert a system’s security mechanisms; see, for example, Back Orifice.
There is a fine line between a Trojan Horse and Adware.
Trojan Horse File
Legend says that after a prolonged war between the Greeks and the Trojans, the Greeks finally surrendered, left behind a gift of truce, abandoned Troy and set off home. Their gift was a large wooden horse, which the Trojans accepted eagerly (despite the advice of their chief priest, who suspected a deception). The horse was larger than the city gates, which were duly demolished. The statue was wheeled in and the party commenced.
In fact, the Greeks had merely sailed around the closest headland to await nightfall. Also, a secret compartment inside the horse contained a select band of Greek fighters, who duly broke out under cover of darkness to initiate the slaughter. Their countrymen sailed back, piled in through the breached defences and completed the rout. Troy was lost.
And that is why it is generally a bad idea to run a program, look at a document or use a spreadsheet that someone has sent you, especially if you do not know that person, or if you were not expecting them to send it to you. If it comes from an untrusted person, then it has untrusted content, whatever words of comfort they may offer you about it. Listen to the chief priest of the Trojans. Discard it.
Even if it seems to come from someone you trust, do not use it if you were not expecting it. There are viruses (see Library File) which send email attachments as if they were you, thus lulling recipients into a false sense of security. Don’t feel bad about refusing files with programmatic content from your friends and colleagues. You can usually get the programs you need from your administrator instead of your friends, and you can ask to receive documents in RTF (see RTF) instead of accepting DOCs.
from Sophos' V-Files
Trust is an imprecise, relative and subjective term in an objective and scientific discipline. It doesn’t really fit in information security — but it’s the best we’ve got. It is used ubiquitously to imply ‹secure›. Since ‹absolutely secure› is something we strive for but cannot achieve, we use instead the imprecise word ‹trusted›. It allows us to claim that the product is safe without having to guarantee it is secure. A trusted system is to the best of our knowledge and belief a secure system (but we’re not guaranteeing it).
An illustration of this can be found in the usual definition of evaluation (even our own): a method for measuring trust. Strictly speaking, how can you formally measure the degree of an imprecise concept? But despite this, we know what is meant.
Trusted Third Party
A trustworthy organisation, such as a bank or specialist consultancy, which provides security-related services, such as encryption and authentication, that enable transactions to be conducted securely.
Under various schemes being implemented or proposed by a number of governments throughout Europe and the world, companies who use strong encryption will be required to lodge copies of their encryption keys with a trusted third party, in order that the keys can be divulged to law enforcement agencies such as those investigating organised crime, drugs or terrorism.
Strictly speaking, a tunnel is a communication link formed by encapsulating a communication protocol’s data packets in a second protocol that normally would be carried above, or at the same layer as, the first one.
In general use, however, the term indicates the use of encryption technology to provide a secure passage across (or through) a public network — usually the Internet. Thus, a tunneling router is capable of routing traffic by encrypting and encapsulating it across the untrusted network, for eventual de-encapsulation and decryption at its destination. ‹Tunnel mode› is a mode within IPsec that encrypts an entire IP packet including the IP header.
Uninterruptible Power Supply (UPS)
A system of electrical components that provides a buffer between utility power, or another power source, and a load that requires uninterrupted, precise power. This often includes a trickle-charge battery system which permits a continued supply of electrical power during brief interruptions (blackouts, brownouts, surges, electrical noise, etc.) of normal power sources.
URL – Uniform Resource Locator
This is the accepted method of defining both the location and the access method for an information object. It is typically used by Internet browsers to locate and retrieve a remote file from a remote server. Every individual file available from the Internet has a unique URL. The URL for this Dictionary is:
The part before the colon indicates the access method — in this case HTTP (the most common alternative is FTP). The two slashes indicate that the next part is the name of the server/domain; and the rest of the URL is the path to the individual file on that server. It can optionally point to a specified marker within the file by appending #markername.
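Python's standard library splits a URL into exactly those parts. The URL below is a made-up example:

```python
from urllib.parse import urlsplit

parts = urlsplit("http://www.example.com/dictionary/entry.htm#markername")

print(parts.scheme)    # access method: 'http'
print(parts.netloc)    # server/domain: 'www.example.com'
print(parts.path)      # path to the file: '/dictionary/entry.htm'
print(parts.fragment)  # marker within the file: 'markername'
```

Parsing with a library rather than by hand matters for security software, since attackers deliberately craft malformed URLs to slip past naive string matching.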
USA Patriot Act (USAPA)
The USA PATRIOT (Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism) Act of 2001 was introduced in the wake of the terrorist attacks on the US World Trade Center of 11 September 2001. It was originally introduced as the Anti-Terrorism Act, but met with such stiff opposition that it was slightly modified.
Nevertheless, while LEAs consider it a major boon in the fight against terrorism, most civil liberties organizations consider it to be over-reactive and draconian. In their eyes it, and similar post 9/11 anti-terrorism acts elsewhere, destroys the very essence of the free West that the terrorists have so far been unable to destroy themselves.
The Electronic Frontier Foundation has an analysis of the Act at http://www.eff.org/Privacy/Surveillance/Terrorism_militias/20011031_eff_usa_patriot_analysis.html. Its Executive Summary expands on the following headings:
The EFF’s chief concerns with the USAPA include:
* Expanded Surveillance With Reduced Checks and Balances.
USAPA expands all four traditional tools of surveillance — wiretaps, search warrants, pen/trap orders and subpoenas. Their counterparts under the Foreign Intelligence Surveillance Act (FISA) that allow spying in the U.S. by foreign intelligence agencies have similarly been expanded.
* Overbreadth with a lack of focus on terrorism.
Several provisions of the USAPA have no apparent connection to preventing terrorism.
* Allows Americans to be More Easily Spied Upon by US Foreign Intelligence Agencies.
Just as the domestic law enforcement surveillance powers have expanded, the corollary powers under the Foreign Intelligence Surveillance Act have also been greatly expanded.
Usenet (User Network)
A public network for conferencing and discussions, comprising thousands of newsgroups and organized by topic.
User profiling is sometimes described as the fifth factor in five-factor access control. In fact, user profiling has much wider applications, and, to some minds, quite serious implications.
User profiling involves developing a profile of the user concerned. The profile can be built by a history of events and actions. It therefore involves monitoring the user’s behavior. Any behavior ‹out-of-character› would trigger a silent alarm to an administrator who could investigate further. Thus, in access control, out-of-character behavior might require further authentication, or the intervention of a supervisor.
User profiling also has a place in fraud prevention. If a credit card owner only ever uses it in one particular store and neither buys cigarettes nor alcohol, but then suddenly uses it in a different store to purchase several hundreds of cigarettes and a dozen bottles of whisky, then the till operator would receive a silent warning to pay extra attention to this transaction.
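Both the access-control and fraud examples reduce to the same mechanism: accumulate a history of a user's actions, then flag any action not yet established as in-character. A deliberately simple Python sketch; the threshold, class name and action tuples are all illustrative, and a real system would use far richer statistical models:

```python
from collections import Counter

class Profile:
    """Accumulates a history of a user's actions and flags any action
    that has not yet been established as in-character."""
    def __init__(self, min_seen=3):
        self.history = Counter()
        self.min_seen = min_seen

    def record(self, action):
        self.history[action] += 1

    def is_out_of_character(self, action):
        # Out-of-character until seen at least min_seen times before.
        return self.history[action] < self.min_seen

profile = Profile()
for _ in range(10):                          # a history of routine purchases
    profile.record(("store_A", "groceries"))
```

An out-of-character result would trigger the silent alarm described above rather than an outright refusal, leaving the decision to a human.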
The problem, as so often, is not in the use of such technology, but in the abuse of that technology. First of all, the assembly of a user profile is by definition an invasion of privacy. Secondly, we can guarantee that the owners of the profiles will treat them as a commercial asset that can be sold to other large marketing organizations.
(User Name, User ID)
A “name”, usually unique, by which each user is known to the system. This name is assigned to each user when they register to use the system.
Usually used in conjunction with a unique password.
A method of encoding binary data into a seven-bit all-printable character stream. Has its roots in Unix. Note that there are several variants and options. Multi-part UUE can occur by default where the object to be encoded results in a file that would be larger than 64KB. Multi-part UUE would break this up into 64KB chunks. There are several variants of UUE, including the ability to define the character table used in encoding.
This presents some difficulty for content security solutions since they have to be able to reconstruct the original object for analysis.
from Content Technologies' Guide to Content Security, 2nd Ed
UUENCODE (UUE), XXENCODE (XXE) and MIME64 (also known as BASE64) are all encoding schemes which allow binary files (which can contain any bytes in any order) to be converted into simple ASCII files (see ASCII) for transmission via email. Since text files generally pass unmolested from one mail system to another, this provides a reliable way of sending binary attachments in email. Once received, a decoding program converts the file back from its ASCII form into pure binary, thus completing the delivery of the attachment.
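Python's standard library implements both schemes, which makes the round trip easy to demonstrate: arbitrary binary bytes go in, printable ASCII comes out, and decoding restores the original exactly (the payload below is an arbitrary example):

```python
import base64
import binascii

payload = b"\x00\xff\x10binary bytes\x7f"

# BASE64/MIME64: every 3 raw bytes become 4 printable characters.
b64 = base64.b64encode(payload)

# uuencode: one line of up to 45 input bytes, a length character
# followed by a printable body.
uue = binascii.b2a_uu(payload)

# Both decode back to the exact original bytes.
decoded_b64 = base64.b64decode(b64)
decoded_uue = binascii.a2b_uu(uue)
```

This reversibility is also why content-security gateways must decode attachments before scanning them: the encoded form of a virus looks like harmless text.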
Most email programs automatically recognise the presence of encoded files inside an email, and present the user with an icon which can be clicked to detach or launch the attachment. “Launching” causes the attachment to be decoded, stored as a temporary file on the hard disk, and then deployed. DOC files (see DOC) are usually launched directly into Word, thus activating any virus they may contain. EXE files (see EXE) are launched directly, as programs.
Clearly, the convenience with which encoding and decoding is done in email software makes it very easy to send files to your friends and colleagues. Before doing so, always ask yourself if it is totally necessary. Sometimes, mail attachments (which embody an element of risk) are sent when a simple text email (see ASCII) would do, saving both time and worry.
from Sophos' V-Files
Visual Basic Script File
Visual Basic Script is a new programming language for Windows which builds upon the old-fashioned batch (BAT) language. Unlike BAT files, which are clumsy and rather limited, VBS programs can be as powerful as any application. In fact, they can invoke any system function, and can also start up, use and shut down other applications, such as Word, silently and invisibly.
VBS programs can also be embedded in HTML files (see HTML) for providing active content on the World Wide Web. Whilst there are security controls in place to help prevent VBS programs triggered over the Web from gaining too much power over your machine, these controls can be adjusted in such a way that would make Web-borne VBS viruses possible.
This means that you should not make changes to your computer's security settings (especially if you are moving to a less secure configuration) without a full understanding of the implications. If you do not know the implications, ask your administrator.
from Sophos' V-Files
Virus construction lab. A software package that makes it easy for anyone to create a computer virus. One simply picks a name, a trigger (eg a particular date) and an action (eg formatting the user’s hard disk) and VCL generates a virus.
VCL is not available commercially, but circulates within the computer underground.
Most anti-virus packages detect viruses created with VCL.
Virtual Private Network (VPN)
A VPN is an extension of an organization’s private network across a public network (usually the Internet). ‹Tunneling›, combined with encryption, is used to retain privacy from one point in the private network, across the Internet, to another point (which could be another network, or a stand-alone PC).
The level of protection afforded to the data depends primarily on:
* the encryption algorithm used,
* the length of encryption key used,
* how frequently the encryption key is changed.
VPN tunnels are created using a protocol such as L2TP, and are secured using a protocol such as IPsec.
A virus is a program or string of code that ‹infects› other executable files and has the ability to replicate itself; ie, to copy itself to other computers or disks, without being asked to do so by the computer user. A virus generally comprises three components: a mission component, a trigger component, and a self-replicating component.
The mission component
Sometimes called the ‹payload›, this is the purpose of the virus. It could be to delete files, alter content, reconfigure the system, display a graphic or graphics, or anything that a program is able to do on a computer. Sometimes the mission component is not to cause damage, but merely to prove that such a virus can be written – this category is called a ‹proof of concept› virus. On other occasions, the mission component is to overwhelm a different system. If hundreds, or even thousands of systems have been infected by the same virus, and the mission component is to bombard a different target system, then that system can soon become overloaded and unable to function correctly.
The trigger component
The trigger component is the element that activates the virus to deliver its mission component. Often, this is simply the execution of the host file or application. Other common triggers are the system date, or day of the week, or month of the year. Whenever the trigger date occurs, the payload is delivered.
The self-replicating component
This is the component that spreads the virus. The most ‹successful› self-replicating component these days is found in the mass mailing viruses (mass mailing = the ‹MM› you see in the official name of many viruses). Such a virus raids the host computer’s email address book, and then mails itself out, using the host’s own email system, to all the addresses it can find. The virus will pretend to be from one entry in the address book, and will mail itself to another entry from the same address book. In this way, the recipient will often recognise the name of the forged sender, and assume that the mail is safe.
An important sub-category of virus is the macro virus. Macros are programs used to automate repetitive functions within complex application software, such as word processors, spreadsheets and databases. A macro virus is a virus hidden within a macro associated with a particular document; because it is written in a macro language (usually WordBasic, Visual Basic or VBScript) rather than native machine code, it is largely platform independent. Macro viruses are most commonly found with MS Office applications, and infect and replicate in templates and within documents.
Boot sector and other sub-categories of virus
Some viruses infect the boot sector (boot sector virus) of a computer and either move data within the boot sector or overwrite the sector with new information. Some boot sector viruses keep part of their code in the boot sector, which initiates the virus, and hide the rest of the virus code in sectors marked as bad. Because those sectors are marked as bad, the operating system and applications will not attempt to use them; thus, the virus code will not get overwritten. A stealth virus hides the modifications that it has made to files or boot records. A polymorphic virus produces varied but operational copies of itself, in the hope of outwitting anti-virus software. A multi-part virus infects both the boot sector of a hard drive and executable files. A self-garbling virus attempts to hide from anti-virus software by garbling its own code; a small portion of the virus code decodes the garbled code when activated.
Note that a virus doesn’t have to do any damage to be called a virus – it simply has to attempt to copy itself. A program which may damage files or take other unauthorized actions, but does not attempt to replicate by itself, is known as a Trojan Horse.
Vulnerability
A weakness in the software and/or hardware design that allows circumvention of the system security.
In social engineering, the vulnerability is human beings’ gullibility.
Wannabee
A generally pejorative term for somebody who would like to be thought of as more proficient than he or she actually is. The implication is that wannabees are not actually capable of being what they want to be. It is often used to describe ‹wannabee hackers›. Script kiddies can be described as wannabee hackers or crackers.
WAP – Wireless Application Protocol
An open and global standard that allows wireless devices, such as mobile phones, to access and interact with information and services, especially the Internet. It features a micro Web browser that displays and transmits specially formatted pages.
War dialer
Software designed to dial wide segments of telephone numbers, and detect which ones are modem access lines. Used to detect dial-in phone numbers.
Such products have both legitimate and illegitimate uses.
Warchalking refers to the practice of placing a chalk marking on the sidewalk or outside wall adjacent to a building with an insecure wireless network. The purpose is to indicate to other hackers/crackers that this could be a good location to gain unauthorized access to a company network and/or its resources.
Unauthorized access is always wrong — so whatever the supposed motives of warchalkers (malicious, curious, challenge), it is an action that cannot be condoned.
Warez
The slang name for pirated software (a corruption of ‹softwares›), used by those who obtain and circulate pirated software for pleasure or profit. Used mostly in the US.
A term used to describe a potential method of generating vast amounts of anonymous spam. It particularly applies to the use of a third party’s mail server hacked via an insecure wireless network. If the hacker can gain access to the mail server, it could be used to relay spam with no chance of the true originator ever being traced (unless caught in the act).
The Wassenaar Arrangement on Export Controls for Conventional Arms and Dual-Use Goods and Technologies is a global, multilateral agreement approved by 33 countries in July 1996. Its purpose is to promote information exchange and cooperation in order to provide consistency in the transfer of arms and dual-use products.
Participating countries use export controls to prevent the dissemination of items on agreed lists. It is relevant to information security because governments have had a tendency to use the Wassenaar Arrangement to justify export controls on strong encryption (classifying it, in the case of the USA, as a ‹munition›).
Like so many other things in life, used correctly it is a good thing, but used incorrectly it is a bad thing.
Weak encryption
A term given to a cryptosystem that uses a key of insufficient length to prevent the possibility, or even probability, that it could be cracked within a meaningful time frame. Also used to describe cryptosystems that have a flawed design and are consequently easily broken.
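A toy illustration of the key-length point: a hypothetical single-byte XOR ‹cipher› has a keyspace of only 256 keys, so a known-plaintext attacker recovers the key by exhaustive search almost instantly. Real weak cryptosystems fail the same way, just with larger (but still searchable) numbers:

```python
# A toy single-byte XOR 'cipher': its 8-bit key gives a keyspace of just
# 256 keys, so exhaustive search is trivial.
def xor_cipher(data, key):
    return bytes(b ^ key for b in data)

plaintext = b"meet me at noon"        # known plaintext (illustrative)
ciphertext = xor_cipher(plaintext, 0x5A)

for key in range(256):                # try the entire keyspace
    if xor_cipher(ciphertext, key) == plaintext:
        print(f"key recovered: 0x{key:02X}")
        break
```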
Web bug is the term given to the process used to secretly pass information from the user’s computer to a third party website. If used in conjunction with cookies, web bugs can be used to gather and track data in the stateless environment of the world wide web.
A web bug is typically a one-pixel, transparent GIF image – although it could equally be any image at all. The HTML code for the web bug (the image) directs the browser to a particular site to retrieve the image, but can, at the same time, pass information to that website. The problem with web bugs is that the user may never be aware that anything untoward is happening.
Web bugs can be placed into an HTML page used for e-mail messages as many mail client programs support the display of HTML pages.
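As a sketch of the detection side, the following hypothetical heuristic flags one-pixel images fetched from a host other than the page’s own. Real web bugs vary widely, so this illustrates the idea rather than a complete defence:

```python
import re

# Heuristic web-bug detector (a sketch only; names and patterns are
# illustrative). It flags <img> tags that are 1 pixel square and whose
# src points at a host other than the page's own.
IMG_RE = re.compile(r"<img\b[^>]*>", re.IGNORECASE)

def find_web_bugs(html, first_party_host):
    bugs = []
    for tag in IMG_RE.findall(html):
        w = re.search(r"width\s*=\s*['\"]?(\d+)", tag, re.I)
        h = re.search(r"height\s*=\s*['\"]?(\d+)", tag, re.I)
        src = re.search(r"src\s*=\s*['\"]([^'\"]+)", tag, re.I)
        if w and h and src and w.group(1) == "1" and h.group(1) == "1":
            if first_party_host not in src.group(1):
                bugs.append(src.group(1))
    return bugs

page = '<p>Hello</p><img src="http://tracker.example/b.gif?uid=42" width="1" height="1">'
print(find_web_bugs(page, "mysite.example"))  # → ['http://tracker.example/b.gif?uid=42']
```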
Web of Trust
Public key cryptography allows you to digitally sign electronic messages. This provides trust in communications, confirming that the sender of a message is exactly who he or she claims to be.
But this trust can only exist if you can be confident that a particular key really does belong to a particular person. One approach uses digital certificates issued by a trusted Certificate Authority.
Another approach is the so-called web of trust used by PGP. Bob knows and trusts Jim — he knows that Jim’s public key is genuine. Jim knows and trusts Alice. By extension, Bob can also trust Alice, because she has already been validated by the trusted Jim. And so the web grows…
This approach seems to be less secure than the hierarchical X.509 approach, but has the great twin advantages of being without cost and easy to understand (since it actually mirrors how trust works in the real world).
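The transitive step (Bob trusts Jim, Jim has verified Alice) can be sketched as reachability in a signature graph. Note that real PGP assigns graded trust levels, which this simplification omits:

```python
from collections import deque

# Toy web of trust: each signer lists the keys they have personally
# verified. A key is trusted if it is reachable from you through a chain
# of signatures. (Real PGP assigns graded trust; this sketch treats every
# signature as fully trusted.)
signatures = {
    "bob": {"jim"},      # Bob has verified Jim's key
    "jim": {"alice"},    # Jim has verified Alice's key
    "alice": set(),
}

def trusted_keys(start):
    seen, queue = set(), deque([start])
    while queue:
        signer = queue.popleft()
        for key in signatures.get(signer, ()):
            if key not in seen:
                seen.add(key)
                queue.append(key)
    return seen

print(sorted(trusted_keys("bob")))  # → ['alice', 'jim']
```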
Webmail describes email (such as Yahoo or Hotmail) that is accessed via your browser rather than via an email client (such as Outlook or Eudora). Your mail is held on the webmail service provider’s server until you read and delete it.
There are both advantages and disadvantages to this approach.
Advantages:
- Webmail accounts are generally free.
- You can access your email from anywhere you can access a PC and the Internet (an Internet Café; a hotel while on the road; home as well as the office; etcetera).
- Use of webmail is usually as anonymous as anything on the Internet can be. If your account is compromised, or you think someone has learnt your identity, discard that account and start another.
Disadvantages:
- It is an on-line service, so it is difficult to compose emails off-line before sending them (although this is not really a problem with always-on broadband).
- It is awkward to save emails locally; you usually end up copying and pasting the content into a local file.
In content security, a white list is a list of sources that are known and trusted, even though their mail would, according to the filtering ‹rules›, be considered spam and normally be blocked. The white list therefore allows mail from those sources through the filtering software regardless of its content.
An example might be ITsecurity.com, which publishes a weekly newsletter of the problems and answers that arise in its Security Clinic. Questions frequently discuss ‹unwanted porn›, ‹blocking hackers›, ‹spam› and so on. ‹Porn›, ‹hack› and ‹spam› are words that are usually blocked by content security filters. In this case they would be blocking a legitimate, indeed very useful, newsletter that has specifically been requested by the recipient. A white list solves the problem by allowing this newsletter through to that recipient.
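A minimal sketch of the override logic, with hypothetical addresses and word lists:

```python
# Minimal content filter with a white-list override (addresses and word
# lists are hypothetical). White-listed senders bypass the content rules.
BLOCKED_WORDS = {"porn", "hack", "spam"}
WHITE_LIST = {"newsletter@itsecurity.example"}

def filter_mail(sender, body):
    if sender.lower() in WHITE_LIST:
        return "deliver"                  # trusted source: no content check
    if set(body.lower().split()) & BLOCKED_WORDS:
        return "block"
    return "deliver"

print(filter_mail("newsletter@itsecurity.example", "This week: spam filters"))  # → deliver
print(filter_mail("unknown@example.com", "cheap spam offer"))                   # → block
```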
Whois
A TCP/IP utility that lets you query compatible servers for detailed information about other Internet users.
WiFi is a brand name controlled by the Wireless Ethernet Compatibility Alliance (WECA). It confirms conformance to the IEEE 802.11b standard for wireless LANs, and ensures interoperability between all WiFi wireless LANs.
WiFi includes basic security features for authentication and privacy.
WiFi’s privacy is based on WEP (wired equivalent privacy) encryption. It cannot be considered secure.
There are three ways (two ‹official› and one ‹unofficial›) in which access control is provided. Firstly, each node in the standard 802.11 network uses a network name referred to as the Service Set IDentifier, or SSID. Users can be required to enter a combination of the SSID and a password before being granted access. Unfortunately, this can be easily sniffed and identified by drive-by hackers.
The unofficial method is similar in concept, but uses the unique Media Access Control (MAC) identifier that is part of every Ethernet device. Many WLANs can store lists of the MACs as access control lists within the APs (Access Points). But they are never encrypted, cannot be changed, and are easily sniffed.
The third approach is good old WEP again. It can be configured to require either a 40 bit or 128 bit key. However the same key is used for all devices, and, in the case of laptops, is stored on the laptop. This means that if any device is compromised (a laptop lost or stolen?), then the access password must be changed on ALL of the devices. Needless to say, WEP does not provide any management functions to make this any easier.
802.11b is not secure, and will probably be replaced by a combination of 802.1x for authentication and a new AES-based algorithm for encryption.
WildList, The (‹in the wild›)
What is The WildList… ?
Over the years, antivirus expert Joe Wells has collected reports of which viruses have been found spreading in the real world. He decided to create a list of these viruses and make that list available to the public, free of charge, to offset some of the ‹numbers games› played by some antivirus product developers.
The list was ‹The WildList›. From its humble beginnings in Joe’s garage, The WildList has grown into the world’s authority on which viruses users should really be concerned with. Used as a basis for testing antivirus software by proficient and competent testing authorities, The Wildlist remains available free to computer users worldwide.
The list is created each month by a team of volunteers, using reports from over 55 antivirus researchers and corporations world-wide.
Now, a dynamic WildList is scheduled to go online. Here, reports of viruses spreading In the Wild will also be available on any given day.
On the 15th day of each month, the formal WildList is extracted from all verified reports, and published at http://www.wildlist.org. Archives of past WildLists are available in the archive.
What exactly is ‹In the Wild›?
When a virus is reported to us by two or more Reporters, it’s a pretty good indication that the virus is out there, spreading, causing real problems to users. We consider such a virus to be ‹In the Wild›.
As far as where is ‹out there›, we like the definition given by Paul Ducklin of Sophos Plc in his paper ‹Counting Viruses›:
For a virus to be considered In the Wild, it must be spreading as a result of normal day-to-day operations on and between the computers of unsuspecting users.
This means viruses which merely exist but are not spreading are not considered ‹In the Wild›.
from The WildList Organization International FAQ
Wired Equivalent Privacy, WEP
WEP is the encryption standard that comes with WiFi LANs. It uses RC4 encryption, which is the same as that used by the security built into standard web browsers (SSL). One might think, therefore, that it is sufficiently tried and tested to be trusted. Well, there’s not a great deal wrong with RC4 — but there is a great deal wrong with its implementation within WiFi. Put simply, it should not be used in this manner. (Technically, it is a stream cipher being used where a stream cipher should not be used. A block cipher would have been better for WLANs. But RC4 was easy and cheap to implement – and with 40 bit keys it was not subject to the then existent US export laws.)
Problems with WEP were known at the end of year 2000. But in summer 2001, the well-known cryptographers Fluhrer, Mantin and Shamir (the ‹S› of RSA) published a new paper in which “we show that RC4 is completely insecure in a common mode of operation which is used in the widely deployed Wired Equivalent Privacy protocol (WEP, which is part of the 802.11 standard), in which a fixed secret key is concatenated with known IV modifiers in order to encrypt different messages. Our new passive ciphertext-only attack on this mode can recover an arbitrarily long key in a negligible amount of time…”
In simple English, this is devastating news for the security of 802.11 WLANs. Basically, there is no security. It prompted Phil Belanger, past chairman and current marketing director of WECA, to comment: “We perceive this as serious and different from the previous attacks, and we’re not going to say ‹Don’t worry about it›. However, we’ve always said that if privacy is a concern, you need to be using end-to-end security mechanisms, like virtual private networks, along with the WLAN.”
Without going into the technical details, RC4’s implementation within WiFi means that in cryptographic terms it is a trivial matter to break the encryption. To make matters worse, there is a freely available hacking tool on the Internet (AirSnort) that can do all the hard work automatically. As a result, wireless LANs using WEP encryption are as vulnerable to script kiddies (wannabee hackers without real technical expertise) as they are to genuine hackers.
But of course the real problem isn’t limited to traffic on the WLAN. Once a wireless terminal is compromised, the hacker has effectively bypassed any firewall and gained access to the entire corporate wired LAN.
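The keystream-reuse flaw behind these attacks can be demonstrated with a toy RC4 in Python. WEP forms each packet key as the IV concatenated with the shared secret, and sends the 24-bit IV in the clear; once an IV repeats, two packets share a keystream, and XORing their ciphertexts cancels that keystream entirely (key and messages below are illustrative values):

```python
# Toy RC4 plus a WEP-style misuse demo. When an IV repeats, two packets
# are encrypted with the identical keystream, so XORing the ciphertexts
# cancels the keystream and leaks the XOR of the plaintexts.
def rc4_keystream(key, n):
    S = list(range(256))
    j = 0
    for i in range(256):                       # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    i = j = 0
    out = bytearray()
    for _ in range(n):                         # pseudo-random generation
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

secret = b"40bit"                              # shared WEP secret (toy value)
iv = b"\x01\x02\x03"                           # 24-bit IV, sent unencrypted
p1, p2 = b"ATTACK AT DAWN", b"HELLO, WORLD!!"

ks = rc4_keystream(iv + secret, len(p1))       # reused IV -> same keystream
c1 = bytes(a ^ b for a, b in zip(p1, ks))
c2 = bytes(a ^ b for a, b in zip(p2, ks))

leak = bytes(a ^ b for a, b in zip(c1, c2))    # keystream cancels out
assert leak == bytes(a ^ b for a, b in zip(p1, p2))
print("c1 XOR c2 reveals p1 XOR p2:", leak.hex())
```

Fluhrer, Mantin and Shamir went further, showing how ‹weak› IVs leak the secret key itself byte by byte; the demo above shows only the simpler keystream-reuse problem.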
World Wide Web; WWW; The Web
The World Wide Web, usually called simply ‹WWW› or ‹the Web›, was ‹invented› by Tim Berners-Lee at CERN. It was developed to provide a means for platform-independent file transfer. It is not the Internet – it uses the Internet as its medium, HTTP for file transfer, and HTML to code the files themselves.
When Tim Berners-Lee invented the World Wide Web, he probably didn’t realize that he was inventing a whole new life form. But that’s what it is. It obeys the rules – from James Lovelock’s ‹reversal of entropy› (a proposed test for the presence of life), to an autoimmune system (the ability to develop defence against viral infection).
It’s a nice conceit (in the metaphysical sense). Or metaphor, if you prefer. It can be extended. Life is subject to evolution. And so is the Web. Internet-enabled companies are like cells, and their websites are their nucleus – new cells are continually evolving: strong ones thrive and weak ones die. In the Web, it’s survival of the fittest.
That is the challenge: to be a strong cell, to grow and thrive in the Web. But it’s not easy. The Web is evolving faster than any previous life form. It started purely as a means for disseminating information – it was little more than a platform-independent data transfer mechanism. Today, just a few years later, it stands on the verge of being the lifeblood of international commerce. It has consumed, or subsumed, business itself.
Because of this, today’s website is expected to be more than just a store for publicly available files and brochures. It is more than a window into the company – it is a doorway, a portal through which customers are invited into the virtual company. It is the nucleus of the company; and if it serves the company well, the company will thrive. If the nucleus fails, the company will die.
From ‹Web Content Management›, by Kevin Townsend
The Web is the biggest single threat to information security today. It is doubtful whether any business could survive without using the Web, either with the maintenance of its own website, or extensive use of other companies’ websites. Every time a single workstation connects to the Web, it is effectively connecting to a single universal untrusted network that is simultaneously home to thousands of individuals whose morality could, to say the least, be questioned.
Worm
A computer program that replicates itself and is self-propagating. Worms, as opposed to viruses, are meant to spawn in network environments. Network worms were first defined by Shoch & Hupp of Xerox in Communications of the ACM (March 1982).
Worms are similar to viruses, and are often discussed in the same terms, but differ in one major way. Worms are self-contained programs – they do not need to, and do not, infect other applications. But they self-replicate (like viruses) by copying themselves to other systems; and they can carry a payload that can be benign or destructive, just like viruses.
One of the most common ‹payloads› is a backdoor or remote administration tool. A backdoor is simply that; a hidden entry for remote hackers. Once installed, the hacker can come and go at will. Backdoors are often used by spammers to relay their mail. This passes the cost on to you, while simultaneously hiding the true location of the spammer.
Another typical payload could be a zombie. This term can be used to describe an implanted process running in background, or the infected machine itself. Under the control of a remote hacker, zombies are used to launch simultaneous attacks against a particular target company or website. Since thousands of zombies could be attacking at the same pre-ordained time, the effect can be devastating.
One danger in both examples is that since it is your machine, and you are responsible for your machine, it is possible that you could be held liable. It is important, therefore, that you take all steps possible to avoid receiving, or eliminate received, worms.
Worm, The (Great) Internet
The Great Internet Worm was written by Robert Morris and released onto the Internet on 2 November 1988. It was designed to replicate – but by all accounts Morris did not expect it to cause the havoc that ensued. A bug caused it to replicate and re-infect much faster than anticipated.
When he realised what was happening, Morris actually tried to mitigate its effects. But by this time it was too late. The only medium capable of warning people, the Internet itself, had already been brought to its knees. About 10% of all Internet traffic had already been choked.
It was several days before solutions were found and disseminated, and the Internet got back to normal. And the effect? An estimated 6000 infected systems; many millions of dollars in costs; and three years probation, 400 hours community service and a fine of $10,000 for Morris.
A basic system operation that results only in the flow of information from a subject to an object.
The Wireless Transport Layer Security (WTLS) protocol is based upon the industry-standard Transport Layer Security (TLS) protocol, formerly known as Secure Sockets Layer (SSL). WTLS is intended for use with the WAP transport protocols and has been optimized for use over narrow-band communication channels. WTLS ensures data integrity, privacy, authentication and denial-of-service protection. For Web applications that employ standard Internet security techniques with TLS, the WAP Gateway automatically and transparently manages wireless security with minimal overhead.
xDSL is the name given to the family of Digital Subscriber Line technologies that is beginning to provide broadband facilities over existing copper lines. One of the most popular is Asymmetric DSL (ADSL).
XLS – Excel Spreadsheet File
All the observations which apply to Word document files (see DOC) apply to Excel spreadsheets. Excel files may contain macros, thus turning XLS files from data into programs. As with DOC files, these macros automatically become part of the system when a file is loaded.
When the first Excel virus appeared, there was some doubt as to whether it would become widespread because the extent to which XLS files were exchanged between computers was unknown. In fact, this virus (XM/Laroux) has become so common that it was Number One in the March 1999 Sophos Top Ten.
from Sophos’ V-Files
XSS
See: Cross Site Scripting
Zimmermann, Phil
The author of PGP (Pretty Good Privacy).
ZIP Archive File
ZIP files contain collections of other files. Because they allow a number of files to be delivered in a single container, and because ZIP files are compressed to save disk space and download time, they are very popular on the Internet.
Unlike JPEG files (see JPEG) or MP3 files (see MP3), ZIP files do not throw away any information when they perform their compression. Whilst this restricts the maximum amount of compression that can be achieved, it ensures that the original files can be restored when decompressing. Obviously, this is vital for files such as programs, which must be preserved exactly.
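The round trip can be demonstrated with Python’s zlib module, which implements the same DEFLATE compression used inside ZIP archives:

```python
import zlib

# Lossless round trip with DEFLATE (the algorithm inside ZIP archives):
# decompression restores the input byte for byte, unlike lossy formats
# such as JPEG or MP3.
original = b"Program bytes must survive compression exactly. " * 20
packed = zlib.compress(original)

print(len(original), "->", len(packed), "bytes")
assert zlib.decompress(packed) == original    # byte-for-byte identical
```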
A ZIP file can contain viruses if any of the files that are packaged into it contain viruses. However, the archive itself is not directly dangerous, because any infected files inside it must be extracted before they can be used and therefore initiate infection.
The most effective way to handle ZIP files (and similar files such as RAR and GZIP) is to use an on-access anti-virus program. In Sophos Anti-Virus, this is the component called InterCheck. It monitors all files as they are created or used, to ensure that they are virus-free. This means that ZIP files, which may contain a large assortment of files you may never use, are not treated as threatening until infected files are extracted from them. At this point, InterCheck reports the virus as it emerges from the ZIP archive (but before it can be launched), and prevents infection.
from Sophos’ V-Files
Zombie
Also known as Bots or IRC Bots, these programs are used by hackers to deploy Distributed Denial of Service (DDoS) agents to various victim machines. For example, when an IRC-Bot-infected Windows PC is started, the Bot waits for the system to finish booting, and then connects to a previously designated IRC server. It then joins a secret IRC channel that is not visible to other users of the IRC server, and waits for a command from its ‹owner›.
Hackers create Bot-carrying email viruses and Trojan-infected Internet downloads, and do anything they can to get their Bots into other people’s computers. Machines that receive a Zombie will most likely also receive a complimentary copy of the latest version of the incredibly invasive Sub7Server Trojan. This grants the hacker who is controlling the Zombie absolute control over the victim’s machine.
A ‹pulsing zombie› is one that sends its bogus messages in periodic bursts rather than continuously.
A zoo virus is one that exists only within a research establishment (such as an anti-virus company’s research laboratories). Compare with ‹in the wild›.