Thursday 27 December 2012

DHCP Servers

Initially the DHCP servers were intended to be part of the solution. The idea was to use
the DHCP server as initiator of the updates. Since this would require control over the
DHCP server, the mobility would be limited to networks under the direct control of
the solution. As the design goal was to allow mobility over the entire Internet, this
solution was abandoned about halfway through the project. However, the DHCP
servers were kept for testing purposes throughout the project.

The DHCP servers were configured as a master and a slave server. If the slave has not
received any signal from the master for more than 30 seconds, it assumes the master has
failed and takes over. This setup ensures that if only one server fails, the
clients can still get a valid IP address in a network known to the routers. The master
and slave split the addresses of the different subnets between them to avoid address
collisions. If one server goes down, the surviving server takes over responsibility
for the failed server’s addresses. This operation is reversed when the faulty server
comes online again.
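
As a rough illustration of the takeover logic described above, here is a minimal Python sketch assuming a hypothetical heartbeat interface; the class and method names are illustrative and not taken from any particular DHCP implementation:

```python
import time

HEARTBEAT_TIMEOUT = 30  # seconds without a master signal before the slave takes over

class SlaveDhcpServer:
    """Hypothetical slave that serves its own address pool and, on master
    failure, the master's pool as well (reversed when the master returns)."""

    def __init__(self, own_pool, master_pool):
        self.own_pool = own_pool
        self.master_pool = master_pool
        self.serving_master_pool = False
        self.last_heartbeat = time.monotonic()

    def on_heartbeat(self):
        self.last_heartbeat = time.monotonic()
        if self.serving_master_pool:
            # Master is back: hand its addresses back to it.
            self.serving_master_pool = False

    def check_master(self):
        if time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT:
            # Assume the master has failed and take over its addresses too.
            self.serving_master_pool = True

    def available_addresses(self):
        pools = list(self.own_pool)
        if self.serving_master_pool:
            pools += list(self.master_pool)
        return pools
```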

Tuesday 25 December 2012

Bottom-Boot and Top-Boot Flash Devices

Some devices are organized as a few small sectors at the bottom address space,
followed by large sectors that fill the remaining space. Some devices have a few small
sectors at the top of the device’s address space and use large sectors in the lower
address space. Since boot code is typically placed in small sectors, flash memory is
sometimes described as bottom-boot or top-boot, depending on where the smaller
sectors are located.

Ultimately, one sector contains the memory space that the CPU accesses as a
result of a power cycle or reset (boot). This sector is usually referred to as the boot
sector. Because some CPUs have a reset vector that is at the top of memory space and
others have a reset vector at the bottom of memory space, the flash devices come in
bottom-boot and top-boot flavors. A processor that boots to the top of memory
space would probably use a top-boot device, and a processor that boots to the bottom
of its memory space would be more suited to a bottom-boot device.

When the boot sector of the flash device exists in the boot-time address space of
the CPU, it can be protected by making the boot sectors unmodifiable. Since only a
small amount of code is typically needed to provide a basic boot, there is little
wasted space if you dedicate a small sector to an unmodifiable boot routine. This
supports the ability to make as much of the flash space in-system reprogrammable
without losing the security of knowing that if all of the reprogrammable flash was
accidentally corrupted, the system would still be able to boot through the small
amount of code in the unmodifiable boot sector.

Sunday 11 November 2012

Who are ethical hackers?

Successful ethical hackers possess a variety of skills. First and foremost, they must be
completely trustworthy. While testing the security of a client’s systems, the ethical
hacker may discover information about the client that should remain secret. In many
cases, this information, if publicized, could lead to real intruders breaking into the
systems, possibly leading to financial losses. During an evaluation, the ethical hacker
often holds the “keys to the company,” and therefore must be trusted to exercise tight
control over any information about a target that could be misused. The sensitivity of
the information gathered during an evaluation requires that strong measures be taken
to ensure the security of the systems being employed by the ethical hackers
themselves: limited-access labs with physical security protection and full ceiling-to-floor
walls, multiple secure Internet connections, a safe to hold paper documentation
from clients, strong cryptography to protect electronic results, and isolated networks
for testing.

Ethical hackers typically have very strong programming and computer networking
skills and have been in the computer and networking business for several years.
They are also adept at installing and maintaining systems that use the more popular
operating systems (e.g., UNIX** or Windows NT**) used on target systems. These
base skills are augmented with detailed knowledge of the hardware and software
provided by the more popular computer and networking hardware vendors. It should
be noted that an additional specialization in security is not always necessary, as strong
skills in the other areas imply a very good understanding of how the security on
various systems is maintained. These systems management skills are necessary for the
actual vulnerability testing, but are equally important when preparing the report for
the client after the test.

Finally, good candidates for ethical hacking have more drive and patience than most
people. Unlike the way someone breaks into a computer in the movies, the work that
ethical hackers do demands a lot of time and persistence. This is a critical trait, since
criminal hackers are known to be extremely patient and willing to monitor systems for
days or weeks while waiting for an opportunity. A typical evaluation may require
several days of tedious work that is difficult to automate. Some portions of the
evaluations must be done outside of normal working hours to avoid interfering with
production at “live” targets or to simulate the timing of a real attack. When they
encounter a system with which they are unfamiliar, ethical hackers will spend the
time to learn about the system and try to find its weaknesses. Finally, keeping up
with the ever-changing world of computer and network security requires continuous
education and review.

One might observe that the skills we have described could just as easily belong to a
criminal hacker as to an ethical hacker. Just as in sports or warfare, knowledge of
the skills and techniques of your opponent is vital to your success. In the computer
security realm, the ethical hacker’s task is the harder one. With traditional crime,
anyone can become a shoplifter, graffiti artist, or a mugger. Their potential targets
are usually easy to identify and tend to be localized. The local law enforcement
agents must know how the criminals ply their trade and how to stop them. On the
Internet anyone can download criminal hacker tools and use them to attempt to
break into computers anywhere in the world. Ethical hackers have to know the
techniques of the criminal hackers, how their activities might be detected, and how
to stop them.

Given these qualifications, how does one go about finding such individuals? The best
ethical hacker candidates will have successfully published research papers or released
popular open-source security software. The computer security community is strongly
self-policing, given the importance of its work. Most ethical hackers, and many of the
better computer and network security experts, did not set out to focus on these issues.
Most of them were computer users from various disciplines, such as astronomy and
physics, mathematics, computer science, philosophy, or liberal arts, who took it
personally when someone disrupted their work with a hack.

Thursday 16 August 2012

Assessing the impact of a design decision


Design agents evaluate the consequences of their decisions by inferring ahead
whether the decision value will satisfy constraints or support goals. In doing
so, it is likely that some of the information required in the inference process is not
yet available, and therefore the agent will attempt to substitute for it with an
expectation.

Figure 1. Design expectation example

Imagine the frame design agent, in our chair design problem, making a decision
about the frame material. Before committing to the design decision the agent
may verify whether the decision will satisfy cost constraints. Therefore it will
need to know the conditions that influence the cost, and the specific correlations
between the values for the determined conditions and the cost ranges. An expectation
such as the one described in Figure 1 could be critical in validating the
agent’s decision before all the cost components are known. Alternatively, the
frame design agent may make a decision that is perfectly valid at that point and that
will be used by other agents, only to have it invalidated later in a cost analysis process.

Thursday 2 August 2012

Data Hoarding

Data hoarding releases currency and therefore changes object semantics. A receiver
hoards replicas from a sender and each replica is denoted as (objectID, semantics),
where semantics is either primary or copy.

General hoarding (G-hoarding): sender owns the primary of an object, i.e.
(objectID, primary), and receiver hoards a copy from the sender. After G-hoarding,
sender still owns the primary, i.e. sender (objectID, primary), and receiver
(objectID, copy).

Primary hoarding (P-hoarding): sender owns the primary of the object, i.e. (objectID,
primary), and receiver hoards the primary from the sender. After P-hoarding, the
primary is transferred from the sender to the receiver and the one in the sender
becomes a copy, i.e. sender (objectID, copy), and receiver (objectID, primary).

Copy hoarding (C-hoarding): sender owns only the copy of the object, i.e. (objectID,
copy), and receiver hoards a copy from the sender. After C-hoarding, both the sender
and the receiver hold copies, i.e. sender (objectID, copy), and receiver (objectID, copy).
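
The three hoarding operations can be summarized by how they change the (objectID, semantics) pairs held by the sender and receiver. The following Python sketch only illustrates those state transitions, with each node's replica table modeled as a plain dictionary:

```python
def g_hoard(sender, receiver, object_id):
    # General hoarding: sender keeps the primary, receiver gets a copy.
    assert sender[object_id] == "primary"
    receiver[object_id] = "copy"

def p_hoard(sender, receiver, object_id):
    # Primary hoarding: the primary moves to the receiver, sender keeps a copy.
    assert sender[object_id] == "primary"
    sender[object_id] = "copy"
    receiver[object_id] = "primary"

def c_hoard(sender, receiver, object_id):
    # Copy hoarding: sender holds only a copy, receiver hoards another copy.
    assert sender[object_id] == "copy"
    receiver[object_id] = "copy"

# Example: replicas held by each node, keyed by objectID.
sender = {"obj1": "primary"}
receiver = {}
p_hoard(sender, receiver, "obj1")   # sender -> copy, receiver -> primary
```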

Sunday 29 July 2012

Bandwidth Estimation on Protocol Mechanisms

A simple mechanism to measure the available bandwidth on a link is the packet-pair
method. It entails sending two packets back-to-back on a link, and measuring the
inter-arrival time of those packets at the receiver. If the packets are sent on a
point-to-point link with no other traffic, the inter-arrival time measures the raw
bandwidth of the link for that size of packets. It is the absolute minimum period at
which packets of that size can be sent. Sending packets at a smaller spacing will
only queue packets at the outbound interface, with no increase in throughput. If the
packets are sent on a multiple hop path mixed with other traffic, routers on the way
may insert other packets between the two packets that were sent back-to-back,
making them arrive farther apart. The number of packets inserted is directly
proportional to the load on the outbound port each router uses to send the packets,
and does not depend on packet size if no fragmentation occurs, as time in the
routers is normally bound by protocol processing and not packet size. If packet size
is equal to the path MTU, the inter-arrival time measured at the receiver is a snapshot
of the bandwidth of the path. The inter-arrival time is the minimum period at which
packets can be sent that will not create a queue in any of the routers on the path.
If the load of all routers in the path is a constant, then the inverse of the inter-arrival
time defines the optimal rate to send packets through this link. Since the load is not
constant, the measurement will have to be repeated from time to time to adjust the
rate to the current conditions.
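
A minimal sketch of the receiver-side arithmetic, assuming the two back-to-back packets have already been timestamped on arrival (the function and variable names are illustrative):

```python
def packet_pair_estimate(arrival_first, arrival_second, packet_size_bytes):
    """Estimate path bandwidth from one back-to-back packet pair.

    The arrival times are receiver timestamps in seconds; their difference is
    the minimum period at which packets of this size can be sent without
    creating a queue in any router along the path.
    """
    gap = arrival_second - arrival_first
    rate_packets_per_sec = 1.0 / gap                 # optimal sending rate
    rate_bits_per_sec = packet_size_bytes * 8 / gap  # path bandwidth snapshot
    return rate_packets_per_sec, rate_bits_per_sec

# Since router load varies, the measurement would be repeated periodically
# and the sending rate adjusted to the latest estimate.
```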

Wednesday 25 July 2012

System Objects, Attributes and Constraints of Software Vulnerability

The definition of software vulnerability includes mismatches between the assumptions
about the environment made during the development and operation of the program,
and the environment in which the program executes. The definitions in this section
refer to these assumptions.

In a computer system, a system object is an entity that contains or receives infor-
mation, that has a unique name, and that has a set of operations that can be carried
out on it. An attribute of an object is a data component of an object. A derived
attribute of another attribute is a data component of the latter attribute. A property
of an attribute is a characteristic of the attribute that can be derived from the attribute
by the application of a function to the attribute.

An attribute refinement is a finite refinement of attributes within attributes, and results in
the identification of the attributes about which assumptions are made. The attribute
refinement cannot contain a property of an attribute.

Tuesday 24 July 2012

Physical Attacks on Secure Embedded Systems

For an embedded system on a circuit board, physical attacks can be launched by
using probes to eavesdrop on inter-component communications. However, for a
system-on-chip, sophisticated microprobing techniques become necessary. The
first step in such attacks is de-packaging. De-packaging involves removal of the
chip package by dissolving the resin covering the silicon using fuming acid. The
next step involves layout reconstruction using a systematic combination of
microscopy and invasive removal of covering layers. During layout
reconstruction, the internals of the chip can be inferred at various granularities.
While higher-level architectural structures within the chip such as data and
address buses, memory and processor boundaries, etc., can be extracted with
little effort, detailed views of lower-level structures such as the instruction
decoder and ALU in a processor, ROM cells, etc., can also be obtained. Finally,
techniques such as manual microprobing or e-beam microscopy are typically
used to observe the values on the buses and interfaces of the components in
a de-packaged chip.

Physical attacks at the chip level are relatively hard to use because of
their expensive infrastructure requirements (relative to other attacks).
However, they can be performed once and then used as precursors
to the design of successful non-invasive attacks. For example, layout
reconstruction is needed before performing electromagnetic radiation
monitoring around selected chip areas. Likewise, the knowledge of
ROM contents, such as cryptographic routines and control data, can
provide an attacker with information that can assist in the design of a
suitable non-invasive attack.

Sunday 22 July 2012

Sidestepping a Dictionary Attack with Username Selection

Of course, a password is only half of the required login credential. A username is
also required. While it is less likely that a dictionary word would be used as a
username, there are still some common usernames that hackers are certain to try
with a brute force attack. First among these are “admin” and “administrator”. These
names are especially dangerous since they are not only easily guessed, but the
accounts they represent are usually highly privileged administrative accounts. If the
hacker’s dictionary attack could gain access to an administrative account, he could
probably do much more damage to the system than he could if he gained access to
a regular user’s account.

Administrative accounts are not the only problem: many Web applications and
Web application frameworks create default users during installation. If the site
administrator does not remove these default users or at least change their
passwords, these accounts will be easy targets for a dictionary attack. Finally,
when users are allowed to choose their own usernames, they often choose their
email address, since it is easy to remember. Once again, the user’s laziness is a
benefit to a hacker using a brute force attack. Armed with a list of email
addresses (perhaps obtained from a spammer) and a dictionary of passwords
(easily obtained anywhere), an attacker has an excellent chance of breaking into
at least one user’s account.
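
On the defensive side, a registration form can simply refuse the usernames a dictionary attack is most likely to try. A small sketch follows; the names beyond “admin” and “administrator” are illustrative additions rather than taken from the text:

```python
import re

DEFAULT_NAMES = {"admin", "administrator", "root", "test", "guest"}

def username_is_risky(username: str) -> bool:
    """Flag usernames a dictionary attack is likely to try first: well-known
    administrative/default names, and email addresses used as usernames."""
    name = username.strip().lower()
    if name in DEFAULT_NAMES:
        return True
    if re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", name):
        return True   # an email address makes the username easy to guess
    return False
```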

Friday 20 July 2012

Personal Scalable Solutions Using Machine Learning

It is absolutely clear that the world is full of structure. In photographs and video, objects, scenes, and events repeat over and over. Objects and scenes, in turn, have structures that characterize them: objects have parts and parts have sub-parts. Many current automatic approaches to indexing visual information rely on specific algorithms constructed by experts. The goal of such algorithms (Visual Detectors) is to automatically label, or index, visual content. While these algorithms can often be robust, it is clear that it is not possible to construct a large number of detectors by hand.

Therefore, for future applications it is desirable to build systems that allow the construction of programs that learn, from user input, how to automatically label data without the need of experts. Not only should such systems be scalable but they should also take into consideration users’ interests and subjectivity. This requires recognizing the structure inherent in visual information and exploiting such structure in computational frameworks. Without a doubt machine learning will form the basis of successful, scalable applications in the future. Those applications will change, and the way such algorithms learn will change. Perhaps learning will take place without explicit input from users.

Wednesday 18 July 2012

AIS-CI and RAI-CI Generation and Detection

The device can transmit and detect the RAI-CI and AIS-CI codes in T1 mode. These
codes are compatible with and do not interfere with the standard RAI (Yellow) and
AIS (Blue) alarms. These codes are defined in ANSI T1.403.


The AIS-CI code (alarm indication signal-customer installation) is the same for both
ESF and D4 operation. Setting the TAIS-CI bit in the TR.T1CCR1 register and the
TBL bit in the TR.T1TCR1 register causes the device to transmit the AIS-CI code.
The RAIS-CI status bit in the TR.SR4 register indicates the reception of an AIS-CI signal.

The RAI-CI (remote alarm indication-customer installation) code for T1 ESF operation
is a special form of the ESF Yellow Alarm (an unscheduled message). Setting the
RAIS-CI bit in the TR.T1CCR1 register causes the device to transmit the RAI-CI code. The
RAI-CI code causes a standard Yellow Alarm to be detected by the receiver. When
the host processor detects a Yellow Alarm, it can then test the alarm for the RAI-CI
state by checking the BOC detector for the RAI-CI flag. That flag is a 011111 code in
the 6-bit BOC message.

The RAI-CI code for T1 D4 operation is a 10001011 flag in all 24 time slots. To
transmit the RAI-CI code the host sets all 24 channels to idle with a 10001011 idle
code. Since this code meets the requirements for a standard T1 D4 Yellow Alarm, the
host can use the receive channel monitor function to detect the 10001011 code
whenever a standard Yellow Alarm is detected.
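
Independent of any particular framer device, the D4 check described above reduces to comparing the 24 received time-slot bytes against the 10001011 pattern once a standard Yellow Alarm has been detected; a sketch:

```python
RAI_CI_D4_CODE = 0b10001011  # idle code carried in all 24 time slots

def is_rai_ci_d4(timeslots):
    """Return True if all 24 DS0 time slots carry the RAI-CI idle code.

    'timeslots' is a sequence of 24 byte values sampled from one frame; in
    practice the check would only be made after a standard Yellow Alarm has
    already been detected.
    """
    return len(timeslots) == 24 and all(ts == RAI_CI_D4_CODE for ts in timeslots)
```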

Monday 16 July 2012

Livelock-Free Operation and Forward-Progress Guarantees

If two transactions in the process of committing either have a write conflict or true
data dependencies, the transaction with the lower TID always succeeds in
committing. The design of the directory guarantees this behavior. A transaction
with a higher TID will not be able to write to a directory until all transactions with
lower TID have either skipped that directory or committed. Furthermore, the
transaction cannot commit until it is sure that all directories it has speculatively
loaded from have serviced all lower numbered transactions that can potentially
send an invalidation to it. This yields a livelock-free protocol that guarantees
forward progress. Limited starvation is possible: a starved transaction keeps its
TID at violation time, thus over time it will become the lowest TID in the system.
While long transactions that retain their TID after aging may decrease system
performance, the programmer is still guaranteed correct execution. Moreover,
TCC provides a profiling environment, TAPE, which allows programmers to
quickly detect the occurrence of this rare event.

Transactions are assigned TIDs at the end of the execution phase to maximize
system throughput. This may increase the probability of starving long-running
transactions, but this is mitigated by allowing those transactions to request and
retain a TID after they violate, thus ensuring their forward progress. TIDs are
assigned by a global TID vendor. Distributed time stamps such as in TLR will
not work for our implementation since these mechanisms do not produce a
gap-free sequence of TIDs, rather only an ordered set of globally unique timestamps.
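
A minimal sketch of a gap-free global TID vendor of the kind described above; the class and method names are illustrative, and the real mechanism is part of the hardware protocol rather than software:

```python
import threading

class TidVendor:
    """Globally shared vendor issuing a gap-free, ordered sequence of TIDs.

    Distributed timestamps (as in TLR) would give unique, ordered values but
    not a gap-free sequence, which the commit protocol above relies on.
    """
    def __init__(self):
        self._next = 0
        self._lock = threading.Lock()

    def next_tid(self):
        with self._lock:
            tid = self._next
            self._next += 1
            return tid

# On a conflict between two committing transactions, the one holding the
# lower TID wins; a violated transaction keeps its TID, so it eventually
# becomes the lowest TID in the system and is guaranteed to commit.
```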

Sunday 15 July 2012

Non-relational Parallel Database Machines

While open research issues remain in the area of parallel database machines
for relational database systems, building a highly parallel database machine for
an object-oriented database system presents a number of new challenges.
One of the first issues to resolve is how declustering should be handled. For
example, should one decluster all sets (such as set-valued attributes of a 
complex object) or just top-level sets? Another question is how inter-object
references should be handled. In a relational database machine, such references
are handled by doing a join between the two relations of interest, but in an
object-oriented DBMS references are generally handled via pointers.
In particular, a tension exists between declustering a set in order to parallelize
scan operations on that set and clustering an object and the objects it references
in order to reduce the number of disk accesses necessary to access the
components of a complex object. Since clustering in a standard object-oriented
database system remains an open research issue, mixing in declustering makes
the problem even more challenging.



Another open area is parallel query processing in an OODBMS. Most OODBMS
provide a relational-like query language based on an extension to relational algebra.
While it is possible to parallelize these operators, how should class-specific methods
be handled? If the method operates on a single object it is certainly not worthwhile
parallelizing it. However, if the method operates on a set of values or objects that are
declustered, then it almost must be parallelized if one is going to avoid moving all the
data referenced to a single processor for execution. Since it is, at this point in time,
impossible to parallelize arbitrary method code, one possible solution might be to
insist that if a method is to be parallelized that it be constructed using the primitives
from the underlying algebra, perhaps embedded in a normal programming language.

Thursday 12 July 2012

Grid Storage API

The behavior of a storage system as seen by a data grid user is defined by the
data grid storage API, which defines a variety of operations on storage systems
and file instances. Our understanding of the functionality required in this API is
still evolving, but it certainly should include support for remote requests to read
and/or write named file instances and to determine file instance attributes such as
size. In addition, to support optimized implementation of replica management
services (discussed below) we require a third-party transfer operation used to
transfer the entire contents of a file instance from one storage system to another.
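
The operations listed above could be summarized as an abstract interface; the following sketch uses illustrative method names, since the actual API was still evolving at the time of writing:

```python
from abc import ABC, abstractmethod

class GridStorageSystem(ABC):
    """Hypothetical sketch of the data grid storage API described above."""

    @abstractmethod
    def read(self, file_instance_name, offset=0, length=None):
        """Remote read of a named file instance."""

    @abstractmethod
    def write(self, file_instance_name, data, offset=0):
        """Remote write to a named file instance."""

    @abstractmethod
    def stat(self, file_instance_name):
        """Return file instance attributes such as size."""

    @abstractmethod
    def third_party_transfer(self, file_instance_name, destination_system):
        """Transfer the entire contents of a file instance directly to another
        storage system, in support of replica management services."""
```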

While the basic storage system functions just listed are relatively simple, various
data grid considerations can increase the complexity of an implementation. For
example, storage system access functions must be integrated with the security
environment of each site to which remote access is required. Robust performance
within higher-level functions requires reservation capabilities within storage
systems and network interfaces. Applications should be able to provide storage
systems with hints concerning access patterns, network performance, and so forth
that the storage system can use to optimize its behavior. Similarly, storage systems
should be capable of characterizing and monitoring their own performance; this
information, when made available to storage system clients, allows them to
optimize their behavior. Finally, data movement functions must be able to detect and
report errors. While it may be possible to recover from some errors within the
storage system, other errors may need to be reported back to the remote application
that initiated the movement.

Monday 9 July 2012

Use of CAPTCHA

CAPTCHA stands for Completely Automated Public Turing Test to Tell
Computers and Humans Apart (Pinkas and Sander, 2002). In this scheme,
some challenge is put forward to the user while attempting to login. It has
been established that these challenges, for example a distorted and
cluttered image of a word with a textured background, are easy for humans
to respond to but rather difficult for computers (an online attacker is
essentially a programmed computer) to answer. Until recently, this scheme
was an effective countermeasure against online dictionary attacks.
However, due to recent developments in Artificial Intelligence and Computer
Vision, programs are available which can quickly interpret and answer these
challenges. EZ-Gimpy and Gimpy for example are word based CAPTCHAs
that have been broken by Greg Mori and Jitendra Malik of UC Berkeley
Computer Vision Group (Berkeley, 2004). Due to these developments, even
CAPTCHA is not considered to be a secure technique to prevent online
dictionary attacks.



A few major web based service providers who were earlier using the
CAPTCHA technique have now resorted to highly inconvenient account
locking in order to counter online dictionary attacks. Clearly, a better and more
elegant method for solving this pressing problem is required.

Sunday 8 July 2012

Multicast for Multirate Wireless LANs

Most research efforts on multicasting in IEEE 802.11
WLANs have focused on improving the service reliability by
integrating ARQ mechanisms into the protocol architecture.
The Leader-Based Protocol (LBP) ARQ mechanism has
been introduced to provide the multicast service with some
level of reliability. To address the ACK implosion problem,
LBP assigns the role of group leader to the multicast receiver
exhibiting the worst signal quality in the group. The group
leader holds the responsibility to acknowledge the multicast
packets on behalf of all the multicast group members, whereas
other MTs may issue Negative Acknowledgement (NACK)
frames when they detect errors in the transmission process.

The 802.11MX reliable multicast scheme described in
uses an ARQ mechanism supplemented by a busy tone signal.
When an MT associated with a multicast group receives a
corrupted packet, it sends a NACK tone instead of actually
transmitting a NACK frame. Upon detecting the NACK tone,
the sender will retransmit the data packet. Since the 802.11MX
mechanism does not need a leader to operate, it performs
better than the LBP protocol in terms of both data throughput
and reliability. However, this mechanism is very costly since
it requires a signaling channel to send the NACK frames and
busy tones. Moreover, neither the LBP nor the 802.11MX scheme
adapts the multicast PHY rate to the state of the receivers.

Very recently, the RAM scheme has been proposed in
for reliable multicast delivery. Similar to the LBP and
802.11MX schemes, the transmitter first has to send an RTS
frame to indicate the beginning of a multicast transmission.
However, in RAM the RTS frame is used by all the multicast
receivers to measure the Receiver Signal Strength (RSS).
Then, each multicast receiver has to send a variable length
dummy CTS frame whose length depends on the selected
PHY transmission mode. Finally, the transmitter senses the
channel to measure the collision duration and can adapt the
PHY rate transmission of the multicast data frame accordingly.
This smart solution is more practical than 802.11MX since
it does not require a signaling channel but still requires the
use of RTS/CTS mechanism and targets reliable transmission
applications.

In SNR-based Auto Rate for Multicast (SARM) is
proposed for multimedia streaming applications. In SARM,
multicast receivers measure the SNR of periodically broadcast
beacon frames and transmit back this information to the AP.
To minimize feedback collision, the backoff time to send
this feedback increases linearly with the received SNR value.
Then, the AP selects the lowest received SNR to adapt the
PHY rate transmission. The main problem with this approach
is that the transmission mode cannot be adapted for each
multicast frame. The multicast PHY rate of SARM is adapted
at each beacon interval. SARM does not make use of any
error recovery mechanism, such as data retransmission.
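
A rough sketch of the two SARM-style computations described above: a feedback backoff that grows linearly with the measured SNR, and an AP that adapts the PHY rate to the lowest reported SNR. The constants and the rate table are illustrative assumptions, not values taken from SARM:

```python
def feedback_backoff(snr_db, slot_time=20e-6, k=1.0):
    """Backoff before sending the SNR feedback: grows linearly with the
    measured SNR, so receivers with poor SNR tend to answer first and
    feedback collisions are reduced. k and slot_time are illustrative."""
    return k * snr_db * slot_time

def select_phy_rate(reported_snrs_db, rate_table):
    """AP side: adapt the multicast PHY rate to the lowest reported SNR.
    rate_table maps a minimum SNR (dB) to a PHY rate (Mb/s)."""
    worst = min(reported_snrs_db)
    eligible = [rate for min_snr, rate in rate_table if worst >= min_snr]
    return max(eligible) if eligible else min(rate for _, rate in rate_table)

RATE_TABLE = [(6, 6.0), (11, 12.0), (17, 24.0), (25, 54.0)]  # (min SNR dB, Mb/s)
print(select_phy_rate([18.0, 9.5, 27.2], RATE_TABLE))        # -> 6.0 (worst is 9.5 dB)
```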

Note that with the exception of RAM and SARM, the mechanisms
described above focus only on improving the reliability
of the multicast service in WLANs. Only RAM and SARM
adapt the PHY transmission rate of the multicast data frames.
In this paper, we define an architecture by integrating the
following facilities: 1) the optimal channel rate adaptation
of the multicast service in IEEE 802.11 WLANs, 2) a more
reliable transmission of the multicast data, 3) the limitation
on the overhead required by the signaling mechanism, and
4) the support of heterogeneity of receivers by using different
multicast groups and hierarchical video coding. The definition
of the proposed cross layer architecture is based on the
multirate capabilities present in the PHY layer of IEEE 802.11
WLANs.

Saturday 7 July 2012

Contextual e-Commerce Knowledge

Contextual e-Commerce Knowledge (CCK) in the
negotiation life cycle contributes to the formation of the
fundamental knowledge framework of the current negotiation
context. It mainly includes buyer’s RFQ, supplier’s quotes,
negotiators’ profiles, and negotiation traces.


The proposed negotiation model supports multi-attribute
RFQ. Typically a buyer creates an RFQ for a procurement
request for a product, be it goods or a service. The RFQ
consists of a list of attributes describing the product. Each
attribute makes a reference to a physical characteristic or
negotiable condition or term. The supplier’s quote is created
based on what the supplier can offer and what the RFQ is
requesting. Proposals are messages (bids) exchanged between
two negotiating parties. For every proposal, the buyer will
refer to the original set of attributes from the RFQ and update the
values of the attributes accordingly. This will repeat during
the bargaining process. An agreement is the final proposal
agreed by both parties if the negotiation succeeds at the end.
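
A minimal sketch of how an RFQ and the proposals derived from it during bargaining could be represented; the names and the attribute set are illustrative, not taken from the negotiation model itself:

```python
from dataclasses import dataclass, field

@dataclass
class Proposal:
    """Generic proposal: an RFQ, a quote, a counter-proposal, or the agreement."""
    attributes: dict = field(default_factory=dict)   # attribute name -> value

def counter(previous: Proposal, updates: dict) -> Proposal:
    """During bargaining each side reuses the RFQ's attribute set and only
    updates the values it wants to negotiate."""
    merged = dict(previous.attributes)
    merged.update(updates)
    return Proposal(attributes=merged)

rfq = Proposal({"quantity": 500, "unit_price": 10.0, "delivery_days": 14})
quote = counter(rfq, {"unit_price": 11.5})
agreement = counter(quote, {"unit_price": 11.0})   # final proposal, if both accept
```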


In general we use proposal or bid to denote RFQ, quote,
agreement, and contract. Not only are all types of proposals
defined within the appropriate domains, but each proposal is also
subject to a particular concept. A concept in the procurement
context is the buyer or supplier’s perception of the product
specified in the proposal. For example, it is the norm for
suppliers to consider purchase orders that can only be
fulfilled under a stringent time constraint. Orders that must
meet a deadline are called urgent orders; the rest are normal
orders. Therefore we have two concepts, urgent and normal, in
this case. They have different specifications, e.g. a large
quantity is not required if the materials cannot be delivered on
time for the forthcoming round of production, and a discount
is not a relevant attribute for negotiation in urgent orders.
The supplier can then specify the conditions under which a
specific concept could be offered. By preparing the possible
concepts, their corresponding specifications, and associated
constraints in advance, the process for choosing the best deal
could be delegated to the negotiation agents rather than
involving both the supplier and buyer in time-consuming
negotiation rounds.

Traders’ profiles are established to keep track of both buyer
and supplier’s information. In our negotiation model, which is
buyer-centric, a buyer’s profile is created to describe the
common procurement preferences of a specific buyer.
Supplier’s profile is created to record the supplier’s credit
which is used for assessment of a particular supplier by the
buyer. It is also used for the buyer to choose the appropriate
negotiation strategy in negotiating with the supplier.

A negotiation trace is a log of all the messages exchanged
between two negotiation partners in a negotiation process. For
successful negotiation in which an agreement is produced in
the end, the negotiation trace contains useful knowledge
describing the nature and progress of the negotiation.

Tuesday 3 July 2012

EMBEDDED PROCESSING ARCHITECTURES FOR SECURITY

In the past, embedded systems tended to perform one or a few fixed functions.
The trend is for embedded systems to perform multiple functions and
also to provide the ability to download new software to implement new or
updated applications in the field, rather than only in the more controlled environment
of the factory. While this certainly increases the flexibility and
useful lifetime of an embedded system, it poses new challenges in terms
of the increased likelihood of attacks by malicious parties. An embedded
system should ideally provide required security functions, implement them
efficiently and also defend against attacks by malicious parties. We discuss
these below, especially in the context of the additional challenges faced
by resource-constrained embedded systems in an environment of ubiquitous
networking and pervasive computing.

Figure 1 illustrates the architectural design space for secure embedded
processing systems. Different macro-architecture models are listed in the
first row, and described further below. These include embedded general
purpose processor (EP) vs. application-specific instruction set processor
(ASIP) vs. EP with custom hardware accelerators connected to the processor
bus, etc. The second row details instruction-set architecture and
micro-architecture choices for tuning the base processor where appropriate.
The third row articulates security processing features that must be chosen
or designed. For example, one must choose the functionality to be implemented
by custom instructions, hardware accelerators, or general-purpose instruction
primitives. The fourth row involves selection of attack-resistant features
in the embedded processor and embedded system design.
This may include an enhanced memory management unit to manage a secure
memory space, process isolation architecture, additional redundant circuitry
for thwarting power analysis attacks, and fault detection circuitry.

Figure 1: Architectural design space for secure information processing

Sunday 1 July 2012

SCATTERNET PROTOCOL

We developed a new scatternet protocol (SNP) layer that
makes the Bluetooth communication transparent. A user who
wants to send data to any other device in a Bluetooth network
simply sends a packet with the address of the receiver
into the network. The SNP is responsible for finding the
shortest path through the network and for guaranteeing that the
packet is received by the target device. When the network
changes, the SNP adapts and learns new paths.
Only local information is used to find the shortest path.
The SNP extracts the routing information by looking at
the data packets that are passing the Bluetooth device it
is running on. The SNP also supports broadcasts and gives
full remote control for all connected Bluetooth devices. This
allows a user to control all Bluetooth devices in a scatternet
from a single host device. To support these functionalities
we added three new mechanisms to the original Bluetooth
stack: (1) SNP addresses are new user defined addresses. (2)
SNP packets are responsible for carrying the payload through
the scatternet. (3) SNP friend tables contain local routing
information that is used to forward the SNP packets towards
the receiver.


Fig. 1. SNP packets. The header contains 5 Bytes specifying the command,
the SNP addresses of the receiver and sender, the number of hops a packet
was traveling as well as 1 Byte giving the amount of payload that the packet
contains.
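
Taking the caption at face value and assuming one byte per header field (the caption does not fully pin down the field widths), an SNP packet could be packed and unpacked as follows:

```python
import struct

# Assumed layout, one byte per field: command, receiver address, sender
# address, hop count, payload length, followed by the payload itself.
SNP_HEADER = struct.Struct("!BBBBB")

def pack_snp(command, receiver, sender, hops, payload: bytes) -> bytes:
    header = SNP_HEADER.pack(command, receiver, sender, hops, len(payload))
    return header + payload

def unpack_snp(frame: bytes):
    command, receiver, sender, hops, length = SNP_HEADER.unpack_from(frame)
    payload = frame[SNP_HEADER.size:SNP_HEADER.size + length]
    return command, receiver, sender, hops, payload
```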

Wednesday 27 June 2012

Bynum about Computer Ethics

In 1989 Terrell Ward Bynum developed another broad definition of
computer ethics following a suggestion of Moor in 1985. According to
this view, computer ethics identifies and analyzes the impacts of
information technology on such social and human values as health,
wealth, work, opportunity, freedom, democracy, knowledge, privacy,
security, self-fulfillment, etc. This very broad view of computer ethics
employs applied ethics, sociology of computing, technology assessment,
computer law, and related fields. It employs concepts, theories, and
methodologies from these and other relevant disciplines. This conception
of computer ethics is motivated by the belief that – eventually – information
technology will profoundly affect everything that human beings hold dear.

Monday 25 June 2012

Shared Queue Pattern

In many parallel algorithms, a queue is a very common data structure that
must be shared among multiple units of execution (UEs). For example, many
graph traversal algorithms maintain a queue that contains a set of vertices
that need to be traversed in the next span. Parallel graph traversal involves
efficient sharing of the queue among concurrent UEs. Also, a task queue
is commonly used as a scheduling and load‐balancing framework,
which implements the Master/Worker pattern.
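
The simplest variant, a single lock guarding the whole queue, can be sketched as follows; it sits at the "simple protocol" end of the trade-off discussed under Forces below:

```python
import threading
from collections import deque

class SharedQueue:
    """Simplest shared-queue variant: one lock guards the whole queue.
    Easy to get right, but all UEs serialize on the single lock, which is
    exactly the concurrency trade-off discussed under Forces."""

    def __init__(self):
        self._items = deque()
        self._not_empty = threading.Condition()

    def put(self, item):
        with self._not_empty:
            self._items.append(item)
            self._not_empty.notify()

    def get(self):
        with self._not_empty:
            while not self._items:
                self._not_empty.wait()
            return self._items.popleft()
```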

Forces

1. Simple protocols vs. increased concurrency: Simple concurrency‐control
protocols employing a single synchronization construct are easy to implement
correctly and hence are less error prone. However, if concurrency‐control
protocols encompass too much of the shared queue in a single synchronization
construct, they will increase the chances that UEs will remain blocked waiting to
access the queue and will limit available concurrency. Conversely, if the
protocols are finely tuned, they will increase the available concurrency, but such
tuning requires more synchronization constructs, making the protocols more
complicated and thus more error-prone.

2. Single queue abstraction vs. multiple queue abstraction: Depending on
the memory hierarchy of the underlying hardware and on the number of UEs,
maintaining a single queue can cause excess contention and increase parallel
overhead. Solutions may need to break with the single‐queue abstraction and
use multiple or distributed queues.



XPath & XPath Filter 2.0 Transform Injection

Attack surface: Key resolution, reference resolution

Attack impact: Denial of service

Exploit scenario: Complex XPath expressions can be costly to process. XPath Filters allow Union, Intersection and Subtraction operations on an XML node set using multiple XPath selections. Intended as a performance optimization, large filter sets specifying many complex XPath expressions can quickly consume many system resources.

Mitigation: Do not process KeyInfo, or keys identified by RetrievalMethod. Restrict the total number of transforms. Reject, via out-of-band schema or DTD validation, any Reference or RetrievalMethod specifying XPath or XPath Filter 2.0 transforms unless required. Identifying content by a whole document reference or by ID is preferable.

Applies to XML Encryption? Yes

Saturday 23 June 2012

Parallelization of the Application Program

While machines like Teradata and Gamma separate the application
program running on a host processor from the database software
running on the parallel processor, both the Tandem and Bubba systems
use the same processors for both application programs and the parallel
database software. This arrangement has the disadvantage of requiring a
complete, full-function operating system on the parallel processor, but it
avoids any potential load imbalance between the two systems and allows
parallel applications. Missing, however, are tools that would allow the 
application programs themselves to take advantage of the inherent
underlying parallelism of these integrated parallel systems. While automatic
parallelization of application programs written in Cobol may not be
feasible, library packages to facilitate explicitly parallel application programs
are needed. Support for the SQL3 NOWAIT option in which the application
can launch several SQL statements at once would be an advance. Ideally the
SPLIT and MERGE operators could be packaged so that applications
could benefit from them.

Wednesday 20 June 2012

Next-generation parallel database systems

The penetration of database technology into new application areas with different
requirements than traditional business data processing has motivated the notion
of next-generation database systems. One major objective is that
the data model to be supported must be more powerful than the relational
model, without compromising its advantages (data independence and high-level
query languages). When applied to more complex application domains such as
engineering, office information systems, and expert systems, the relational data
model exhibits limitations in terms of rule management, type system and complex
object support.

To address these issues, two important technologies, KBMSs and
OODBMSs, are currently being investigated. Initially considered antagonistic,
many believe today that a combination of their capabilities into deductive and
object-oriented database (DOOD) systems will shape next-generation, universal
database systems. For the same reasons which led to parallel relational database
systems, implementing KBMSs and OODBMSs on parallel computers can be
cost-effective. Obviously, this presents new, challenging research problems in
addition to the current issues of KBMSs and OODBMSs.

Monday 11 June 2012

Retargetable API generation

The generator has been designed to be retargetable to different host languages by
separating the implementation in a generic front-end, which produces an abstract
representation of an API, and a host language specific back-end. For each host
language, code templates need to be provided for generating automata, escaping,
and pretty-printing. For the Java and PHP back-ends the templates amount to
420 and 400 lines of code respectively.

To make the implementation of a new back-end as lightweight as possible, the
generator back-ends produce parse trees of the host API, which can be unparsed
to a source API without the need for a pretty printer for the host language. In this
way, only a syntax definition of the host language is required.

Wednesday 25 April 2012

Geometry Rendering

The sensing techniques described above produce a collection of point samples
on the surface of objects. The point cloud is usually transformed into a triangle
mesh for graphical rendering via standard mesh rendering techniques. Laser
scanning, stereo vision, and spacetime stereo can additionally capture images
of the scene which can be used to associate color with the point samples or as
a texture map for the resulting mesh. The simplest approach is to create
a triangulation of the points by connecting a point with its neighbors as determined
by the sensor layout. For example, a point corresponding to a pixel in the
camera of a structured light, stereo vision, or shaped light pulse system would
be connected to points corresponding to the neighboring pixels.
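
For a sensor that produces one point per pixel, this neighbour-based triangulation amounts to emitting two triangles per 2x2 block of pixels; a sketch, ignoring invalid samples and holes for brevity:

```python
def grid_triangulation(width, height):
    """Triangulate a depth map by connecting each pixel to its neighbours:
    every 2x2 block of adjacent pixels yields two triangles. Vertex i is the
    point sample at pixel (row, col), with i = row * width + col."""
    triangles = []
    for row in range(height - 1):
        for col in range(width - 1):
            i = row * width + col
            triangles.append((i, i + 1, i + width))              # upper-left triangle
            triangles.append((i + 1, i + width + 1, i + width))  # lower-right triangle
    return triangles
```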

More advanced and complex procedures can fill holes, align, and merge multiple
depth maps of the same object. This creates a model that has more
complete geometry. A single depth map will only show the geometry of one
side and will have holes wherever there is occlusion. Multiple depth maps from
different viewpoints will reduce occlusion and the merging algorithm can also
make guesses to fill in holes. Unfortunately, most of these techniques do not work
with deformable objects because aligning the depth maps depends on the object
being rigid. A recent advance in geometry processing is able to extract
correspondences and merge point clouds of deforming objects. However, the
size of the dataset that can be processed is limited.

Monday 2 April 2012

Updating Relational Data by SPARUL Statements

In many cases, an RDF view contains quad map patterns that map all columns
of some table into triples in such a way that sets of triples made from different
columns are “obviously” pairwise disjoint and invoked IRI classes are bijections.
E.g., quad map patterns for RDF property tables usually satisfy these restrictions
because different columns are for different predicates and column values are
used as object literals unchanged. We are presently extending SPARUL
compiler and run-time in order to make such RDF views updatable.

The translation of a given RDF graph into SQL data manipulation statements
begins with extracting all SQL values from all calculable fields of triples and
partitioning the graph into groups of triples, one group per distinct extracted
primary key of some source table. Some triples may become members of more
than one group, e.g., a triple may specify a relation between two table rows. After
integrity check, every group is converted into one insert or delete statement.
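
The grouping step could be sketched as follows; the extract_key callback stands in for the RDF-view machinery that maps a triple to the primary key(s) of its source rows, and its signature is purely illustrative rather than Virtuoso's:

```python
from collections import defaultdict

def partition_triples(triples, extract_key):
    """Group triples by the distinct primary key extracted from each one;
    a triple relating two table rows is placed in both rows' groups.
    extract_key(triple) returns the (table, primary_key) pairs the triple
    maps to under the RDF view."""
    groups = defaultdict(list)
    for triple in triples:
        for key in extract_key(triple):
            groups[key].append(triple)
    # After an integrity check, each group becomes one INSERT or DELETE.
    return groups
```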

The partitioning of N triples requires O(N ln N) operations and keeps data in
memory, so it is poorly suited to big dump/restore operations but quite efficient for
transactions of limited size, like individual bookkeeping records, personal FOAF
files, etc.

Sunday 1 April 2012

IP Address Allocation

A new node is randomly assigned one of the lowest free addresses
contained in the FAT, which means that the IP address is assigned in
increasing order from a randomly chosen IP Address Block. We require
the new joining node to obtain its IP address from at least K nodes. We
use for this purpose the threshold signature described above and the new
concept of ‘On-line Joint IP address and Public Key Certificate’. Each
allocated IP address in the network is bound to the node’s identity by means
of this certificate which must be signed by the On-line CA.

After having received a signed IP Address, the new node must broadcast
a signed registration message to all nodes to be able to participate actively
in the network. Any assigned IP address which is not registered yet is
removed from the FAT and kept in the PAT, either by the K signer nodes
after having assigned this address or by all nodes after having received a
registration message for a higher IP address in the same IP Address Block.
If the registration message is received, the IP address is removed from the
PAT and put in the RAT.

Figure 1. Functional Blocks of TCSAP and DPKI modules

Sunday 18 March 2012

Extensible Access Control Markup Language

XACML is an XML specification for expressing fine-grained information access
policies in XML documents or any other electronic resource.

At configuration time, XACML expresses and communicates the rules and policies
that an access-control mechanism uses to derive an access decision for a set of
subjects and attributes. By comparison, at run time SAML formulates assertions
about subjects, their attributes, and their access rights. For digital rights
management or workflow processing use cases, an application or medium can
transmit XACML rules together with the content to which access is being regulated.
If necessary, mechanisms outside XACML must be used to enforce the
integrity of access rules and confidentiality of content.

The XACML specification defines ways to encode rules, bundle rules into policies,
and define selection and combination algorithms in cases where multiple rules and
policies apply.

Access control lists in XACML are 4-tuples—subject, target object, permitted
action, provision. The subject can include user IDs, groups, or role
names. The target object allows granularity down to a single XML document
element. The permitted action primitive can be either read, write, create, or delete.
This represents a major XACML limitation because it does not accommodate
domain-specific permission types. A provision is an action that must execute
upon a rule’s activation (for both deny and grant rules). Such actions may include
initiating log-in, requesting additional credentials, and sending an alert. The
XACML specification defines a language for formulating such provisions.
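
The 4-tuple model can be illustrated with a toy evaluator; this sketch implements only a simple first-applicable combining strategy, uses illustrative Python names, and is not XACML syntax:

```python
from typing import Callable, NamedTuple, Optional

class Rule(NamedTuple):
    subject: str      # user ID, group, or role name
    target: str       # resource, potentially down to a single XML element
    action: str       # "read", "write", "create", or "delete"
    provision: Optional[Callable[[], None]] = None  # e.g. log, alert, request credentials
    effect: str = "grant"

def evaluate(rules, subject, target, action):
    """Return the effect of the first matching rule and execute its provision,
    which fires for both grant and deny rules."""
    for rule in rules:
        if (rule.subject, rule.target, rule.action) == (subject, target, action):
            if rule.provision:
                rule.provision()
            return rule.effect
    return "deny"
```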

Friday 16 March 2012

XML API and Application Support

DB2 introduced a new SQL column type in the database, the XML data type.
Applications can bind various language specific data types for input and output
of XML columns or parameters. These existing language specific data types
only allow the user to work with XML as character or binary types.

In order to use XML efficiently and seamlessly, new language specific XML
types are added to the existing client interfaces. These new language specific
XML types enable the database to be more efficient and enable the database
to supply a richer API for the applications. By making XML explicit in the
application, the database will avoid unnecessary and/or unwanted code
page conversions. XML documents have an internal encoding declaration
which makes all but the XML parser’s transcoding unnecessary. Avoiding
unnecessary code page conversions is often an important performance
benefit. Additionally, transcoding an XML document without carefully
adjusting the XML encoding declaration might make the XML document
invalid.

All the major database interfaces support the XML type natively, i.e.,
treating XML data as XML, not as a character type. Below, we will touch
on JDBC, ODBC, .NET, and embedded SQL.

Sunday 11 March 2012

Domain specific middleware for Distributed Real-time and Embedded Systems

Domain-specific middleware services are tailored to the requirements of particular
DRE system domains, such as avionics mission computing, radar processing, online
financial trading, or distributed process control. Unlike the previous three middleware
layers—which provide broadly reusable “horizontal” mechanisms and services—domain-specific
middleware services are targeted at vertical markets. From both a COTS and
R&D perspective, domain-specific services are the least mature of the middleware
layers, due in part to the historical lack of distribution middleware and common
middleware service standards needed to provide a stable base upon which to create
domain-specific middleware services. Since they embody knowledge of a domain,
however, domain-specific middleware services have the most potential to increase the
quality and decrease the cycle time and effort that integrators require to develop
particular classes of DRE systems.

An example of domain-specific middleware services R&D that is relevant for DRE
systems is the Boeing Bold Stroke architecture, which has been used as the open
experimentation platform on many DARPA ITO programs. Bold Stroke is an open
architecture for mission computing avionics capabilities, such as navigation, heads-up
display management, weapons targeting and release, and air frame sensor processing.
The domain specific middleware services in Bold Stroke are layered upon COTS
processors (PowerPC), network interconnects (VME), operating systems
(VxWorks), infrastructure middleware (ACE), distribution middleware (TAO), and
common middleware services (QuO and the CORBA Event Service).

Friday 9 March 2012

SECURE EMBEDDED SYSTEM DESIGN

Our experiences with personal computers and the Internet have clearly identified information security as a paramount challenge. Embedded systems, which are used pervasively in our lives, now contain our sensitive personal data, identity, and even our purchasing power, and perform several safety-critical functions. Some examples include mobile phones, MP3 players, automotive electronics, medical appliances, and ubiquitous devices such as sensors and RFID tags. Unless embedded system security is adequately addressed, it will become a concern that impedes the adoption and usage of many embedded system products, applications, and services.

Several technologies have been developed to address information security (cryptography, secure communication protocols, anti-virus tools, firewalls, intrusion detection, and so on), which can be adapted to embedded systems. These technologies can be referred to as "functional" security measures, since they usually specify functions that must be added to the target system without any consideration of how they are embodied in hardware or software.

However, they are hardly sufficient to ensure the security of embedded systems in practice. Most real security attacks do not directly take on the theoretical strength of cryptographic algorithms; instead, they target weaknesses in a system's "implementation". Moreover, embedded-system designers have to cope with security as yet another requirement, in addition to performance, power, cost, etc.

We will present an introduction to embedded system security challenges, and argue that effective security solutions can be realized only if they are built-in at various stages of the design process (architecture, HW design, and SW development). The objectives of secure embedded system design will be defined from the designer's perspective as addressing various "gaps" such as

1. the assurance gap, which refers to the gap between functional security measures and truly secure implementations,

2. the security processing gap, which arises due to the processing requirements of the additional computations that must be performed for the purpose of security, and

3. the battery gap, which is a consequence of the energy consumed in performing security-related functions.

We will provide an overview of our research in this area, covering both embedded system architectures that address these gaps, and methodologies that assist in their design. We will use mobile appliances (mobile phones, PDAs) to illustrate secure embedded system design challenges, and describe MOSES, a security platform that we have developed and deployed in NEC's next-generation mobile phones.

Wednesday 7 March 2012

Sensor Networks and their Node Tracking Scenarios

Consider the tracking of a tagged object through a region of space monitored by a sensor
network. There are many situations where one would like to track the location
of valuable assets or personnel. Current inventory control systems attempt to
track objects by recording the last checkpoint that an object passed through.
However, with these systems it is not possible to determine the current location
of an object. For example, UPS tracks every shipment by scanning it with a
barcode whenever it passes through a routing center. The system breaks
down when objects do not flow from checkpoint to checkpoint. In typical work
environments it is impractical to expect objects to be continually passed through
checkpoints.

With wireless sensor networks, objects can be tracked by simply tagging them
with a small sensor node. The sensor node will be tracked as it moves through
a field of sensor nodes that are deployed in the environment at known locations.
Instead of sensing environmental data, these nodes will be deployed to sense
the RF messages of the nodes attached to various objects. The nodes can be
used as active tags that announce the presence of a device. A database can be
used to record the location of tracked objects relative to the set of nodes at
known locations. With this system, it becomes possible to ask where an
object is currently, not simply where it was last scanned.

Unlike sensing or security networks, node tracking applications will continually
have topology changes as nodes move through the network. While the connectivity
between the nodes at fixed locations will remain relatively stable, the connectivity
to mobile nodes will be continually changing. Additionally the set of nodes being
tracked will continually change as objects enter and leave the system. It is essential
that the network be able to efficiently detect the presence of new nodes that enter
the network.

Friday 2 March 2012

Fuzzy Rules for Large Feature Spaces

Many relevant applications of information mining, e.g. the analysis of gene expression
patterns, inherently deal with high-dimensional data. Similarly, multimedia data
analysis and multi-relational database mining usually involve dealing with high-
dimensional feature spaces.

In addition to that, data fusion and the propositionalization of multi-relational
databases, which are essential to accessing data previously unavailable to many
analysis methods, usually produce high-dimensional data sets. Although the
number of input variables can sometimes be reduced by preprocessing, it is
necessary to include methods that are robust to high-dimensional input data.
But while large feature spaces have to be searched, interesting relationships in
such data often involve smaller subsets of variables and can be represented
comprehensibly by fuzzy rules.

Fuzzy rules are easily understood by users, thus fulfilling their immediate
information needs. The fuzzy rule induction algorithm is specifically suited
to dealing with large feature spaces and heterogeneous data. An advanced
version constructs a hierarchy of rule sets with different levels of complexity.
For instance, this approach allows users to assess basic relations with
relatively coarse fuzzy rules while using a higher level of detail for prediction tasks.
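
To make the form of such rules concrete, here is a small illustrative sketch (the variable
names, membership functions, and values are invented for the example and are not taken
from any particular algorithm): a single rule that touches only two features of a
high-dimensional sample.

    # Illustrative only: one fuzzy rule "IF x3 is LOW AND x17 is HIGH THEN class A",
    # using triangular membership functions; the rule touches 2 of many features.

    def triangular(x, a, b, c):
        """Triangular membership function with support [a, c] and peak at b."""
        if x <= a or x >= c:
            return 0.0
        if x <= b:
            return (x - a) / (b - a)
        return (c - x) / (c - b)

    def rule_activation(sample):
        """Degree to which the sample fires the rule (min as the AND operator)."""
        low_x3 = triangular(sample["x3"], 0.0, 0.0, 0.5)     # "x3 is LOW"
        high_x17 = triangular(sample["x17"], 0.5, 1.0, 1.0)  # "x17 is HIGH"
        return min(low_x3, high_x17)

    sample = {"x3": 0.2, "x17": 0.8}   # the remaining features are simply ignored
    print(rule_activation(sample))      # 0.6: the sample matches the rule fairly well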

Wednesday 29 February 2012

Requirements for a FDR System

Usually, a Facial Detection and Recognition (FDR) system has the following
capabilities:

1. Interfacing with a video source for grabbing facial images. This implies the
possibility of starting/stopping a video stream and capturing frames from it;

2. Automatic detection or manual selection of the human faces that may be found
within the scene;

3. Manipulating (creating, adding to, deleting from) a database of faces. The
possibility of browsing the database visually is also recommended;

4. Launching the recognition process by comparing the previously detected face
with the faces in the database. The classification operation is usually preceded by
some image preprocessing operations and feature extraction.


Figure 3 summarizes the basic operations required by a facial detection and recognition
system.

Fig. 3. Operations required by a facial detection and recognition system.
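
As a rough sketch of steps 2 to 4 above, and only as an illustration (the Haar cascade is
OpenCV's bundled frontal-face detector, and the raw pixel-distance matching is a deliberately
crude stand-in for proper feature extraction and classification; the threshold is arbitrary):

    # Illustrative sketch only: detect faces with OpenCV's Haar cascade and match
    # them against a tiny in-memory "database" by raw pixel distance.
    import cv2
    import numpy as np

    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    face_db = {}  # name -> 64x64 grayscale face image

    def extract_faces(frame):
        """Step 2: automatic detection of faces within a grabbed frame."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        return [cv2.resize(gray[y:y + h, x:x + w], (64, 64)) for (x, y, w, h) in boxes]

    def enroll(name, face):
        """Step 3: add a face to the database."""
        face_db[name] = face.astype(np.float32)

    def recognize(face, threshold=2000.0):
        """Step 4: compare a detected face against the database's faces."""
        best_name, best_dist = None, float("inf")
        for name, stored in face_db.items():
            dist = np.linalg.norm(stored - face.astype(np.float32))
            if dist < best_dist:
                best_name, best_dist = name, dist
        return best_name if best_dist < threshold else None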

Monday 27 February 2012

Cloud Storage Systems

Cloud storage systems are a subset, but often also the foundation, of cloud computing
systems and therefore share the properties listed above. There is a plethora of
different cloud storage services around, and here again it is very hard to give an exact
definition. On one side there are services which enable the end user to store simple
files as a backup solution. On the other side there are a lot of new database systems
which are commonly called key/value stores. The expression "key/value store" is an
umbrella term for services which are often also described as document-oriented,
attribute-oriented, distributed hash table, and key/value database. All of these names
are variations on the same theme that emphasize different aspects of these systems.
Because of the frequent changes in this field, and due to the fact that every few
weeks a new system appears on the horizon, we refer the reader to online lists for
a comparison of such systems.

One of the key distinguishing characteristics of cloud storage systems compared to
traditional RDBMS is their different, diametrically opposed emphasis on consistency
and availability. We have already mentioned the CAP theorem in the introduction.
Traditional RDBMS are built on the historical premise that consistency is the
main goal of a database system. As a result, they often struggle to scale beyond a
small number of server nodes. With many servers the failure rate increases, and
enforcing consistency then reduces availability.

Cloud storage systems, on the other hand, are built around the premise of being available
almost constantly and of scaling out across a large number of nodes. This is necessary to
serve the needs of heavily used web services which operate on a worldwide scale.
Yet high availability can only be achieved by cutting back on the consistency guarantees.
Instead of the full ACID guarantees of traditional RDBMS, most cloud storage systems
support BASE properties: Basically Available, Soft state, Eventual consistency.
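
To make the BASE idea a bit more tangible, here is a toy sketch, not modeled on any
particular product, of a replicated key/value store that stays available by accepting
writes on any replica and reconciling them later with a last-write-wins rule:

    # Toy illustration of eventual consistency: every replica accepts writes
    # immediately; an anti-entropy pass later converges them (last write wins).
    import time

    class Replica:
        def __init__(self):
            self.data = {}          # key -> (timestamp, value)

        def put(self, key, value):
            self.data[key] = (time.time(), value)   # always available for writes

        def get(self, key):
            entry = self.data.get(key)
            return entry[1] if entry else None      # may return stale data

    def anti_entropy(replicas):
        """Merge all replicas so that the newest write for each key wins."""
        merged = {}
        for r in replicas:
            for key, (ts, value) in r.data.items():
                if key not in merged or ts > merged[key][0]:
                    merged[key] = (ts, value)
        for r in replicas:
            r.data = dict(merged)

    a, b = Replica(), Replica()
    a.put("user:42", "Alice")        # write lands on replica a only
    print(b.get("user:42"))          # None: replica b has not seen the write yet
    anti_entropy([a, b])
    print(b.get("user:42"))          # "Alice": the replicas eventually converge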

Saturday 25 February 2012

How Does DISCOVIR Perform Content-Based Image Retrieval?

We briefly outline the process of sharing and retrieving images in DISCOVIR
(DIStributed COntent-based Visual Information Retrieval). First, each peer is
responsible for performing feature extraction (for example, color, texture, shape,
and so on) on its own shared images using the DISCOVIR client program.
With this program, each peer maintains a local index of feature vectors for its
image collection. When a peer, the requester, initiates a query by giving an example
image and a particular feature extraction method, it performs feature extraction on
the example image and sends the feature vector, contained in a query message, to
all its connecting peers. Other peers then compare this query to their feature
vector index. Based on a distance measure, they find a set of similar images
and return the results to the requester. Likewise, these peers propagate the
query to their own connecting peers, and this process continues until a large
portion of the network has been queried.
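
The comparison step amounts to a nearest-neighbour search over feature vectors. The sketch
below illustrates only that step; the colour-histogram feature and the Euclidean distance are
assumptions made for the example (DISCOVIR itself supports pluggable extractors), and it uses
NumPy and Pillow:

    # Illustration of the local index and distance comparison done by each peer
    # (feature choice and distance measure are assumptions for this sketch).
    import numpy as np
    from PIL import Image

    def color_histogram(path, bins=8):
        """A simple feature vector: a normalized RGB histogram of the image."""
        pixels = np.asarray(Image.open(path).convert("RGB")).reshape(-1, 3)
        hist, _ = np.histogramdd(pixels, bins=(bins, bins, bins), range=((0, 256),) * 3)
        return hist.flatten() / pixels.shape[0]

    # Each peer maintains a local index: file name -> feature vector.
    local_index = {}  # e.g. local_index["cat.jpg"] = color_histogram("cat.jpg")

    def handle_image_query(query_vector, top_k=5):
        """Return this peer's most similar images for an incoming query vector."""
        scored = sorted(local_index.items(),
                        key=lambda item: np.linalg.norm(item[1] - query_vector))
        return scored[:top_k]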

DISCOVIR uses a plug-in architecture to support different feature extraction methods.
Users may choose to download a plug-in, in the form of compiled Java bytecode, if
they want to perform CBIR based on that particular feature extraction method. The
DISCOVIR client program can be downloaded from the site [MIPLAB]. In the following,
we describe the overall architecture of DISCOVIR, which operates on the current
Gnutella network, and the modification of query messages.

Gnutella Message Modification: The DISCOVIR system is compatible with the
Gnutella (v0.4) protocol [Gnutella]. In order to support the image query functionalities
mentioned above, two types of messages are added. They are:

--ImageQuery: a special type of Query message. It carries the name of the feature
extraction method and the feature vector of the query image.

--ImageQueryHit: a special type of QueryHit message, sent in response to the
ImageQuery message. It contains the location, file name, and size of each similar
image retrieved, together with its similarity measure to the query. In addition, the
location of the corresponding thumbnail is included so that the result set can be
previewed more quickly.
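
Purely to illustrate the information these messages carry (the field names below are
assumptions, not the actual Gnutella/DISCOVIR wire format), the two message types could
be modelled like this:

    # Illustrative data model of the two added message types; field names are
    # assumptions and not the real DISCOVIR wire encoding.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class ImageQuery:                    # special type of Query message
        feature_method: str              # name of the feature extraction method
        feature_vector: List[float]      # feature vector of the query image

    @dataclass
    class ImageHit:
        location: str                    # where the similar image can be fetched
        file_name: str
        file_size: int
        similarity: float                # distance/similarity measure to the query
        thumbnail_location: str          # for fast preview of the result set

    @dataclass
    class ImageQueryHit:                 # special type of QueryHit message
        hits: List[ImageHit]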

Thursday 23 February 2012

Signals and Systems Using MATLAB

SSUM (Signals and Systems Using MATLAB) is a continually expanding suite
of exploratory demonstrations and applications. It demonstrates essential principles
and concepts of MSP without requiring advanced mathematics. To use SSUM,
MATLAB must be installed, as well as the Signal Processing Toolbox.

SSUM currently has 37 programs illustrating concepts of waveforms, modulation,
sampling and interpolation, aliasing, the frequency domain, convolution and
filtering, pole-zero diagrams, analysis and synthesis, signal features, and many
others. Many of these are applicable to sounds and images. There are also
demonstrations of curious topics such as sound cross synthesis, reverberation
using all-pass filters, additive synthesis of birdsong, sine wave speech synthesis,
and modeling of musical instruments.

SSUM is perfect for use in lectures, labs, and assignments. All SSUM programs
are wrapped in GUIs, so there is no need to type unwieldy commands at the
prompt. Many of the applications are integrated as well. For instance, a waveform
created in one application can be sent to another application for filtering, and to
yet another to examine its frequency content.
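
SSUM itself is MATLAB-based, but the waveform, filter, and spectrum chain that its
applications pass between one another can be sketched in a few lines of plain Python with
NumPy and SciPy (illustrative only; the signal and filter parameters are arbitrary):

    # Illustration (not SSUM): generate a waveform, low-pass filter it, and look
    # at its frequency content. Parameter values are arbitrary.
    import numpy as np
    from scipy import signal

    fs = 8000                                    # sampling rate in Hz
    t = np.arange(0, 1.0, 1.0 / fs)
    waveform = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 2500 * t)

    # Low-pass filter: keep the 440 Hz component, attenuate the 2500 Hz one.
    b, a = signal.butter(4, 1000 / (fs / 2), btype="low")
    filtered = signal.lfilter(b, a, waveform)

    # Frequency content before and after filtering.
    freqs = np.fft.rfftfreq(len(t), 1.0 / fs)
    spectrum_before = np.abs(np.fft.rfft(waveform))
    spectrum_after = np.abs(np.fft.rfft(filtered))
    idx_2500 = np.argmin(np.abs(freqs - 2500))
    print(spectrum_before[idx_2500], spectrum_after[idx_2500])  # large vs. strongly attenuated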

SSUM uses and extends the programming style used in the excellent “MATLAB
Auditory Demonstrations” application. Modularizing the code and keeping the
GUI separate from the functionality makes SSUM much more manageable.
When a new application is desired, it is quite easy to copy and reuse the functionality.

Tuesday 21 February 2012

Web Server Caching vs Proxy Caching

There is a significant difference between document caching in primary and
proxy Web servers. In proxy servers, caching is done on the proxy server's
disk (main memory is primarily used for metadata and network buffers).
The set of requested documents is huge (potentially the whole World Wide
Web may need to be cached). Primary Web servers, in contrast, need
to cache a relatively small set of local documents in main memory. Also,
primary Web servers have more skewed access patterns (popular documents
are requested much more often than unpopular ones), which allows for better
cache utilization. Direct application of proxy caching policies to primary
Web servers may therefore not result in the best performance.

In Web proxies, CPU overhead is not very important because almost all
cached documents are stored on disk and the time to maintain auxiliary cache
data structures is much smaller than the time to read a file from disk. In
primary Web servers, documents are cached in memory. Hence, the internal
complexity of the caching algorithm becomes an important issue.
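
Because that per-request bookkeeping matters once documents live in memory, a primary
server's document cache can be as simple as an LRU map with constant-time operations.
A minimal sketch, illustrative only and not a description of any particular server:

    # Minimal in-memory LRU document cache; every operation is O(1), which matters
    # when the cache is in RAM and algorithm overhead is no longer hidden by disk I/O.
    from collections import OrderedDict

    class DocumentCache:
        def __init__(self, max_bytes):
            self.max_bytes = max_bytes
            self.used_bytes = 0
            self.entries = OrderedDict()          # path -> document body

        def get(self, path):
            body = self.entries.get(path)
            if body is not None:
                self.entries.move_to_end(path)    # mark as recently used
            return body

        def put(self, path, body):
            if path in self.entries:
                self.used_bytes -= len(self.entries.pop(path))
            self.entries[path] = body
            self.used_bytes += len(body)
            while self.used_bytes > self.max_bytes and self.entries:
                _, evicted = self.entries.popitem(last=False)   # evict least recently used
                self.used_bytes -= len(evicted)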