Sunday 29 July 2012

Bandwidth Estimation on Protocol Mechanisms

A simple mechanism to measure the available bandwidth on a link is the packet-pair
method. It entails sending two packets back-to-back on a link, and measuring the
inter-arrival time of those packets at the receiver. If the packets are sent on a
point-to-point link with no other traffic, the inter-arrival time measures the raw
bandwidth of the link for that size of packets. It is the absolute minimum period at
which packets of that size can be sent. Sending packets at a smaller spacing will
only queue packets at the outbound interface, with no increase in throughput. If the
packets are sent on a multiple hop path mixed with other traffic, routers on the way
may insert other packets between the two packets that were sent back-to-back,
making them arrive farther apart. The number of packets inserted is directly
proportional to the load on the outbound port each router uses to send the packets,
and does not depend on packet size if no fragmentation occurs, as time in the
routers is normally bound by protocol processing and not packet size. If packet size
is equal to the path MTU, the inter-arrival time measured at the receiver is a snapshot
of the bandwidth of the path. The inter-arrival time is the minimum period at which
packets can be sent that will not create a queue in any of the routers on the path.
If the load on every router in the path were constant, the inverse of the inter-arrival
time would define the optimal rate at which to send packets along this path. Since the
load is not constant, the measurement has to be repeated from time to time to adjust
the rate to the current conditions.
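
As a concrete illustration, here is a minimal Python sketch of the rate calculation just
described; the function name, the timestamps, and the 1500-byte packet size are
illustrative assumptions, not part of any particular measurement tool.

def packet_pair_rate(arrival_first: float, arrival_second: float,
                     packet_size_bytes: int) -> float:
    """Estimate the sustainable send rate in bits per second.

    The inter-arrival time of two back-to-back packets is the minimum period
    at which packets of this size can be sent without queuing along the path,
    so its inverse, scaled by the packet size, estimates the available bandwidth.
    """
    inter_arrival = arrival_second - arrival_first
    if inter_arrival <= 0:
        raise ValueError("arrival timestamps must be strictly increasing")
    return (packet_size_bytes * 8) / inter_arrival

# Example: two 1500-byte (path-MTU-sized) packets arriving 1.2 ms apart
# suggest roughly 10 Mbit/s of available bandwidth at this moment.
print(packet_pair_rate(0.0000, 0.0012, 1500))  # 10000000.0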

Wednesday 25 July 2012

System Objects, Attributes and Constraints of Software Vulnerability

The definition of software vulnerability includes mismatches between the assumptions
about the environment made during the development and operation of the program,
and the environment in which the program executes. The definitions in this section
refer to these assumptions.

In a computer system, a system object is an entity that contains or receives infor-
mation, that has a unique name, and that has a set of operations that can be carried
out on it. An attribute of an object is a data component of an object. A derived
attribute of another attribute is a data component of the latter attribute. A property
of an attribute is a characteristic of the attribute that can be derived from it
by the application of a function to the attribute.

An attribute refinement is a finite refinement of attributes within attributes, and results in
the identification of the attributes about which assumptions are made. The attribute
refinement cannot contain a property of an attribute.
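
A small sketch may make these definitions concrete. The representation below models an
attribute refinement as a finite chain of attribute names resolved within nested
attributes, and deliberately leaves out properties (values computed from an attribute by
a function); the class, function, and attribute names are hypothetical.

from dataclasses import dataclass, field
from typing import Any, Dict, Tuple

@dataclass
class SystemObject:
    name: str                                   # unique name of the entity
    attributes: Dict[str, Any] = field(default_factory=dict)

def resolve(obj: SystemObject, refinement: Tuple[str, ...]) -> Any:
    """Follow a finite attribute refinement: attributes within attributes.

    Properties (values computed from an attribute by a function, e.g. its
    length) are intentionally not reachable through a refinement.
    """
    value: Any = obj.attributes
    for attr in refinement:
        value = value[attr]                     # each step names a data component
    return value

# Example: the derived attribute "extension" of the attribute "filename".
request = SystemObject("file_request",
                       {"filename": {"directory": "/etc", "extension": "conf"}})
print(resolve(request, ("filename", "extension")))   # conf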

Tuesday 24 July 2012

Physical Attacks on Secure Embedded Systems

For an embedded system on a circuit board, physical attacks can be launched by
using probes to eavesdrop on inter-component communications. However, for a
system-on-chip, sophisticated microprobing techniques become necessary. The
first step in such attacks is de-packaging. De-packaging involves removal of the
chip package by dissolving the resin covering the silicon using fuming acid. The
next step involves layout reconstruction using a systematic combination of
microscopy and invasive removal of covering layers. During layout
reconstruction, the internals of the chip can be inferred at various granularities.
While higher-level architectural structures within the chip such as data and
address buses, memory and processor boundaries, etc., can be extracted with
little effort, detailed views of lower-level structures such as the instruction
decoder and ALU in a processor, ROM cells, etc., can also be obtained. Finally,
techniques such as manual microprobing or e-beam microscopy are typically
used to observe the values on the buses and interfaces of the components in
a de-packaged chip.

Physical attacks at the chip level are relatively hard to mount because of
their expensive infrastructure requirements (relative to other attacks).
However, they can be performed once and then used as precursors
to the design of successful non-invasive attacks. For example, layout
reconstruction is needed before performing electromagnetic radiation
monitoring around selected chip areas. Likewise, the knowledge of
ROM contents, such as cryptographic routines and control data, can
provide an attacker with information that can assist in the design of a
suitable non-invasive attack.

Sunday 22 July 2012

Sidestepping a Dictionary Attack with Username Selection

Of course, a password is only half of the required login credential. A username is
also required. While it is less likely that a dictionary word would be used as a
username, there are still some common usernames that hackers are certain to try
with a brute force attack. First among these are “admin” and “administrator”. These
names are especially dangerous since they are not only easily guessed, but the
accounts they represent are usually highly privileged administrative accounts. If the
hacker’s dictionary attack could gain access to an administrative account, he could
probably do much more damage to the system than he could if he gained access to
a regular user’s account.

Administrative accounts are not the only problem: many Web applications and
Web application frameworks create default users during installation. If the site
administrator does not remove these default users or at least change their
passwords, these accounts will be easy targets for a dictionary attack. Finally,
when users are allowed to choose their own usernames, they often choose their
email address, since it is easy to remember. Once again, the user’s laziness is a
benefit to a hacker using a brute force attack. Armed with a list of email
addresses (perhaps obtained from a spammer) and a dictionary of passwords
(easily obtained anywhere), an attacker has an excellent chance of breaking into
at least one user’s account.
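
As an illustration of the username-selection point, the hedged sketch below flags
usernames that a dictionary attack is certain to try; the blocklist entries beyond
"admin" and "administrator" and the email pattern are assumptions, not a complete policy.

import re

# Usernames that dictionary attacks try first; "admin" and "administrator"
# come from the text above, the rest are assumed additions.
COMMON_TARGET_USERNAMES = {"admin", "administrator", "root", "guest", "test"}
EMAIL_PATTERN = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def is_risky_username(username: str) -> bool:
    """Return True if the username is an obvious target for a brute force attack."""
    name = username.strip().lower()
    if name in COMMON_TARGET_USERNAMES:
        return True       # default or administrative names are guessed first
    if EMAIL_PATTERN.match(name):
        return True       # email addresses are easily harvested by attackers
    return False

print(is_risky_username("Administrator"))      # True
print(is_risky_username("alice@example.com"))  # True
print(is_risky_username("quiet_heron_42"))     # False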

Friday 20 July 2012

Personal Scalable Solutions Using Machine Learning

It is absolutely clear that the world is full of structure. In photographs and video, objects, scenes, and events repeat over and over. Objects and scenes, in turn, have structures that characterize them: objects have parts and parts have sub-parts. Many current automatic approaches to index visual information rely on specific algorithms constructed by experts. The goal of such algorithms (Visual Detectors) is to automatically label, or index, visual content. While these algorithms can often be robust, it is clear that it is not possible to construct a large number of detectors by hand.

Therefore, for future applications it is desirable to build systems that allow the construction of programs that learn, from user input, how to automatically label data without the need of experts. Not only should such systems be scalable but they should also take into consideration users’ interests and subjectivity. This requires recognizing the structure inherent in visual information and exploiting such structure in computational frameworks. Without a doubt machine learning will form the basis of successful, scalable applications in the future. Those applications will change, and the way such algorithms learn will change. Perhaps learning will take place without explicit input from users.

Wednesday 18 July 2012

AIS-CI and RAI-CI Generation and Detection

The device can transmit and detect the RAI-CI and AIS-CI codes in T1 mode. These
codes are compatible with and do not interfere with the standard RAI (Yellow) and
AIS (Blue) alarms. These codes are defined in ANSI T1.403.


The AIS-CI code (alarm indication signal-customer installation) is the same for both
ESF and D4 operation. Setting the TAIS-CI bit in the TR.T1CCR1 register and the
TBL bit in the TR.T1TCR1 register causes the device to transmit the AIS-CI code.
The RAIS-CI status bit in the TR.SR4 register indicates the reception of an AIS-CI signal.

The RAI-CI (remote alarm indication-customer installation) code for T1 ESF operation
is a special form of the ESF Yellow Alarm (an unscheduled message). Setting the
RAIS-CI bit in the TR.T1CCR1 register causes the device to transmit the RAI-CI code. The
RAI-CI code causes a standard Yellow Alarm to be detected by the receiver. When
the host processor detects a Yellow Alarm, it can then test the alarm for the RAI-CI
state by checking the BOC detector for the RAI-CI flag. That flag is a 011111 code in
the 6-bit BOC message.

The RAI-CI code for T1 D4 operation is a 10001011 flag in all 24 time slots. To
transmit the RAI-CI code the host sets all 24 channels to idle with a 10001011 idle
code. Since this code meets the requirements for a standard T1 D4 Yellow Alarm, the
host can use the receive channel monitor function to detect the 10001011 code
whenever a standard Yellow Alarm is detected.
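
The following sketch shows how a host processor might qualify a received Yellow Alarm
as RAI-CI in both ESF and D4 modes; the functions are hypothetical driver-level code,
and only the 011111 BOC flag and the 10001011 D4 code come from the description above.

RAI_CI_BOC_FLAG = 0b011111    # 6-bit BOC message flagging RAI-CI in ESF mode
RAI_CI_D4_CODE = 0b10001011   # code carried in all 24 time slots in D4 mode

def esf_rai_ci_detected(yellow_alarm_active: bool, boc_message: int) -> bool:
    """In ESF mode, qualify a standard Yellow Alarm as RAI-CI via the BOC detector."""
    return yellow_alarm_active and (boc_message & 0b111111) == RAI_CI_BOC_FLAG

def d4_rai_ci_detected(yellow_alarm_active: bool, channel_codes: list) -> bool:
    """In D4 mode, qualify a Yellow Alarm as RAI-CI if every time slot carries 10001011."""
    return yellow_alarm_active and all(code == RAI_CI_D4_CODE for code in channel_codes)

# Example: a Yellow Alarm plus the 011111 BOC flag indicates RAI-CI in ESF mode.
print(esf_rai_ci_detected(True, 0b011111))              # True
print(d4_rai_ci_detected(True, [RAI_CI_D4_CODE] * 24))  # True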

Monday 16 July 2012

Livelock-Free Operation and Forward-Progress Guarantees

If two transactions in the process of committing either have a write conflict or true
data dependencies, the transaction with the lower TID always succeeds in
committing. The design of the directory guarantees this behavior. A transaction
with a higher TID will not be able to write to a directory until all transactions with
lower TIDs have either skipped that directory or committed. Furthermore, the
transaction cannot commit until it is sure that all directories it has speculatively
loaded from have serviced all lower numbered transactions that can potentially
send an invalidation to it. This yields a livelock-free protocol that guarantees
forward progress. Limited starvation is possible: a starved transaction keeps its
TID at violation time, thus over time it will become the lowest TID in the system.
While long transactions that retain their TID after aging may decrease system
performance, the programmer is still guaranteed correct execution. Moreover,
TCC provides a profiling environment, TAPE, which allows programmers to
quickly detect the occurrence of this rare event.

Transactions are assigned TIDs at the end of the execution phase to maximize
system throughput. This may increase the probability of starving long-running
transactions, but this is mitigated by allowing those transactions to request and
retain a TID after they violate, thus ensuring their forward progress. TIDs are
assigned by a global TID vendor. Distributed time stamps such as in TLR will
not work for our implementation since these mechanisms do not produce a
gap-free sequence of TIDs, rather only an ordered set of globally unique timestamps.
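
A minimal sketch of the ordering rule and the TID vendor described above; the class and
function names are illustrative, and the real enforcement happens in the directory
hardware rather than in software like this.

import itertools

class TIDVendor:
    """Issues a gap-free, monotonically increasing sequence of transaction IDs."""
    def __init__(self) -> None:
        self._counter = itertools.count(1)

    def next_tid(self) -> int:
        return next(self._counter)

def commit_winner(tid_a: int, tid_b: int) -> int:
    """On a write conflict or true dependency, the lower TID commits first."""
    return min(tid_a, tid_b)

# A violated (starved) transaction keeps its TID, so over time it becomes the
# lowest TID in the system and is eventually guaranteed to win.
vendor = TIDVendor()
old, young = vendor.next_tid(), vendor.next_tid()
assert commit_winner(old, young) == old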

Sunday 15 July 2012

Non-relational Parallel Database Machines

While open research issues remain in the area of parallel database machines
for relational database systems, building a highly parallel database machine for
an object-oriented database system presents a number of new challenges.
One of the first issues to resolve is how declustering should be handled. For
example, should one decluster all sets (such as set-valued attributes of a 
complex object) or just top-level sets? Another question is how inter-object
references should be handled. In a relational database machine, such references
are handled by doing a join between the two relations of interest, but in an
object-oriented DBMS references are generally handled via pointers.
In particular, a tension exists between declustering a set in order to parallelize
scan operations on that set and clustering an object and the objects it references
in order to reduce the number of disk accesses necessary to access the
components of a complex object. Since clustering in a standard object-oriented
database system remains an open research issue, mixing in declustering makes
the problem even more challenging.
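
To make the tension concrete, the sketch below hash-declusters a set-valued attribute
across nodes so that a scan can proceed in parallel; the node count and hash choice are
arbitrary assumptions, and the comment notes the cost paid on object assembly.

import zlib
from collections import defaultdict
from typing import Dict, Iterable, List

def decluster(elements: Iterable[str], num_nodes: int) -> Dict[int, List[str]]:
    """Spread the elements of a set over nodes so a scan can run in parallel."""
    placement: Dict[int, List[str]] = defaultdict(list)
    for element in elements:
        placement[zlib.crc32(element.encode()) % num_nodes].append(element)
    return dict(placement)

# Declustering a six-element set-valued attribute over 3 nodes parallelizes the
# scan, but assembling the parent complex object now needs accesses on up to
# 3 different nodes instead of one well-clustered set of disk pages.
print(decluster(["part-a", "part-b", "part-c", "part-d", "part-e", "part-f"], 3))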



Another open area is parallel query processing in an OODBMS. Most OODBMSs
provide a relational-like query language based on an extension to relational algebra.
While it is possible to parallelize these operators, how should class-specific methods
be handled? If the method operates on a single object it is certainly not worthwhile
parallelizing it. However, if the method operates on a set of values or objects that are
declustered, then it almost certainly must be parallelized if one is going to avoid moving all the
data referenced to a single processor for execution. Since it is, at this point in time,
impossible to parallelize arbitrary method code, one possible solution might be to
insist that any method to be parallelized be constructed using the primitives
from the underlying algebra, perhaps embedded in a normal programming language.

Thursday 12 July 2012

Grid Storage API

The behavior of a storage system as seen by a data grid user is defined by the
data grid storage API, which defines a variety of operations on storage systems
and file instances. Our understanding of the functionality required in this API is
still evolving, but it certainly should include support for remote requests to read
and/or write named file instances and to determine file instance attributes such as
size. In addition, to support optimized implementation of replica management
services (discussed below) we require a third-party transfer operation used to
transfer the entire contents of a file instance from one storage system to another.
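
The interface sketch below lists the operations just named (remote read/write of named
file instances, attribute queries such as size, and third-party transfer); the method
names and signatures are assumptions for illustration, not an actual data grid API.

from abc import ABC, abstractmethod
from typing import Dict

class GridStorageSystem(ABC):
    """Abstract view of a storage system as seen through the data grid storage API."""

    @abstractmethod
    def read(self, file_instance: str, offset: int, length: int) -> bytes:
        """Remote read from a named file instance."""

    @abstractmethod
    def write(self, file_instance: str, offset: int, data: bytes) -> None:
        """Remote write to a named file instance."""

    @abstractmethod
    def get_attributes(self, file_instance: str) -> Dict[str, object]:
        """Return file instance attributes such as size."""

    @abstractmethod
    def third_party_transfer(self, file_instance: str,
                             destination: "GridStorageSystem",
                             destination_name: str) -> None:
        """Copy the entire contents of a file instance to another storage
        system, in support of replica management, without routing the data
        through the client that requested the transfer."""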

While the basic storage system functions just listed are relatively simple, various
data grid considerations can increase the complexity of an implementation. For
example, storage system access functions must be integrated with the security
environment of each site to which remote access is required. Robust performance
within higher-level functions requires reservation capabilities within storage
systems and network interfaces. Applications should be able to provide storage
systems with hints concerning access patterns, network performance, and so forth
that the storage system can use to optimize its behavior. Similarly, storage systems
should be capable of characterizing and monitoring their own performance; this
information, when made available to storage system clients, allows them to
optimize their behavior. Finally, data movement functions must be able to detect and
report errors. While it may be possible to recover from some errors within the
storage system, other errors may need to be reported back to the remote application
that initiated the movement.

Monday 9 July 2012

Use of CAPTCHA

CAPTCHA stands for Completely Automated Public Turing Test to Tell
Computers and Humans Apart (Pinkas and Sander, 2002). In this scheme,
some challenge is put to the user while attempting to log in. It has
been established that these challenges, for example a distorted and
cluttered image of a word on a textured background, are easy for humans
to answer but rather difficult for computers (an online attacker is
essentially a programmed computer). Until recently, this scheme
was an effective countermeasure against online dictionary attacks.
However, due to recent developments in Artificial Intelligence and Computer
Vision, programs are available which can quickly interpret and answer these
challenges. EZ-Gimpy and Gimpy for example are word based CAPTCHAs
that have been broken by Greg Mori and Jitendra Malik of the UC Berkeley
Computer Vision Group (Berkeley, 2004). Due to these developments, even
CAPTCHA is not considered to be a secure technique to prevent online
dictionary attacks.



A few major web-based service providers who were earlier using the
CAPTCHA technique have now resorted to highly inconvenient account
locking in order to counter online dictionary attacks. Clearly, a
better and more elegant method for solving this pressing problem is required.

Sunday 8 July 2012

Multicast for Multirate Wireless LANs

Most research efforts on multicasting in IEEE 802.11
WLANs have focused on improving the service reliability by
integrating ARQ mechanisms into the protocol architecture.
The Leader-Based Protocol (LBP) ARQ mechanism has
been introduced to provide the multicast service with some
level of reliability. To address the ACK implosion problem,
LBP assigns the role of group leader to the multicast receiver
exhibiting the worst signal quality in the group. The group
leader holds the responsibility to acknowledge the multicast
packets on behalf of all the multicast group members, whereas
other MTs may issue Negative Acknowledgement (NACK)
frames when they detect errors in the transmission process.

The 802.11MX reliable multicast scheme uses an ARQ
mechanism supplemented by a busy tone signal.
When an MT associated with a multicast group receives a
corrupted packet, it sends a NACK tone instead of actually
transmitting a NACK frame. Upon detecting the NACK tone,
the sender will retransmit the data packet. Since the 802.11MX
mechanism does not need a leader to operate, it performs
better than the LBP protocol in terms of both data throughput
and reliability. However, this mechanism is very costly since
it requires a signaling channel to send the NACK frames and
busy tones. Moreover, neither the LBP nor the 802.11MX
scheme adapts the multicast PHY rate to the state of the receivers.

Very recently, the RAM scheme has been proposed for
reliable multicast delivery. Similar to the LBP and
802.11MX schemes, the transmitter first has to send an RTS
frame to indicate the beginning of a multicast transmission.
However, in RAM the RTS frame is used by all the multicast
receivers to measure the Receiver Signal Strength (RSS).
Then, each multicast receiver has to send a variable length
dummy CTS frame whose length depends on the selected
PHY transmission mode. Finally, the transmitter senses the
channel to measure the collision duration and can adapt the
PHY rate transmission of the multicast data frame accordingly.
This smart solution is more practical than 802.11MX since
it does not require a signaling channel, but it still
requires the use of the RTS/CTS mechanism and targets reliable transmission
applications.

SNR-based Auto Rate for Multicast (SARM) has been proposed
for multimedia streaming applications. In SARM,
multicast receivers measure the SNR of periodically broadcast
beacon frames and transmit back this information to the AP.
To minimize feedback collision, the backoff time to send
this feedback increases linearly with the received SNR value.
Then, the AP selects the lowest received SNR to adapt the
PHY rate transmission. The main problem with this approach
is that the transmission mode cannot be adapted for each
multicast frame. The multicast PHY rate of SARM is adapted
only at each beacon interval. Moreover, SARM does not make
use of any error recovery mechanism, such as data retransmission.
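
A hedged sketch of the SARM rate decision described above;
the SNR thresholds and the rate set are invented for
illustration and are not the values used by SARM or
mandated by 802.11.

# (minimum SNR in dB, PHY rate in Mbit/s); thresholds and rates are illustrative.
SNR_TO_RATE_MBPS = [
    (25.0, 54.0),
    (18.0, 24.0),
    (10.0, 12.0),
    (0.0, 6.0),
]

def select_multicast_rate(reported_snrs_db):
    """Adapt the PHY rate to the lowest SNR reported by the multicast receivers."""
    worst = min(reported_snrs_db)
    for min_snr, rate in SNR_TO_RATE_MBPS:
        if worst >= min_snr:
            return rate
    return SNR_TO_RATE_MBPS[-1][1]

# Receivers reporting 27, 19 and 12 dB: the 12 dB receiver constrains the
# whole group to the 12 Mbit/s mode until the next beacon interval.
print(select_multicast_rate([27.0, 19.0, 12.0]))   # 12.0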

Note that, with the exception of RAM and SARM, the
mechanisms described above focus only on the reliability
of the multicast service in WLANs. Only RAM and SARM
adapt the PHY transmission rate of the multicast data frames.
In this paper, we define an architecture by integrating the
following facilities: 1) the optimal channel rate adaptation
of the multicast service in IEEE 802.11 WLANs, 2) a more
reliable transmission of the multicast data, 3) the limitation
on the overhead required by the signaling mechanism, and
4) the support of heterogeneity of receivers by using different
multicast groups and hierarchical video coding. The definition
of the proposed cross layer architecture is based on the
multirate capabilities present in the PHY layer of IEEE 802.11
WLANs.

Saturday 7 July 2012

Contextual e-Commerce Knowledge

Contextual e-Commerce Knowledge (CCK) in the
negotiation life cycle contributes to the formation of the
fundamental knowledge framework of the current negotiation
context. It mainly includes buyer’s RFQ, supplier’s quotes,
negotiators’ profiles, and negotiation traces.


The proposed negotiation model supports multi-attribute
RFQs. Typically a buyer creates an RFQ for a procurement
request for a product, be it goods or a service. The RFQ
consists of a list of attributes describing the product.
Each attribute refers to a physical characteristic or a
negotiable condition or term. The supplier’s quote is
created accordingly, based on what the supplier can offer
and what the RFQ is requesting. Proposals are messages
(bids) exchanged between

two negotiating parties. For every proposal, the buyer
refers to the original set of attributes from the RFQ and
updates the values of the attributes accordingly. This
repeats during the bargaining process. An agreement is the
final proposal accepted by both parties if the negotiation succeeds.
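
A minimal sketch of the multi-attribute proposal exchange
just described; the class names and attributes are
hypothetical, not the schema of the proposed model.

from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class Proposal:
    """An RFQ, quote, counter-proposal or agreement: a set of attribute values."""
    sender: str
    attributes: Dict[str, Any]   # each attribute is a characteristic or negotiable term

@dataclass
class NegotiationTrace:
    """Log of all proposals exchanged between two negotiation partners."""
    messages: List[Proposal] = field(default_factory=list)

    def counter(self, previous: Proposal, sender: str,
                updates: Dict[str, Any]) -> Proposal:
        """Build a counter-proposal by updating values of the original attribute set."""
        proposal = Proposal(sender, {**previous.attributes, **updates})
        self.messages.append(proposal)
        return proposal

trace = NegotiationTrace()
rfq = Proposal("buyer", {"quantity": 500, "unit_price": 10.0, "delivery_days": 14})
trace.messages.append(rfq)
quote = trace.counter(rfq, "supplier", {"unit_price": 11.5, "delivery_days": 10})
print(quote.attributes)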


In general we use proposal or bid to denote RFQ, quote,
agreement, and contract. Not only are all types of
proposals defined within the appropriate domains, but each
proposal is also subject to a particular concept. A concept
in the procurement context is the buyer’s or supplier’s
perception of the product specified in the proposal. For
example, it is common for suppliers to handle purchase
orders that can only be fulfilled under a stringent time
constraint. We call orders that must meet a deadline urgent
orders and the rest normal orders, so in this case we have
two concepts, urgent and normal. They have different
specifications: e.g. a large quantity is not required if
the materials cannot be delivered in time for the
forthcoming round of production, and a discount is not a
relevant attribute for negotiation in urgent orders.
The supplier can then specify the conditions under which a
specific concept could be offered. By preparing the possible
concepts, their corresponding specifications, and associated
constraints in advance, the process for choosing the best deal
could be delegated to the negotiation agents rather than
involving both the supplier and buyer in time-consuming
negotiation rounds.

Traders’ profiles are established to keep track of both
the buyer’s and the supplier’s information. In our
negotiation model, which is buyer-centric, the buyer’s
profile is created to describe the common procurement
preferences of a specific buyer. The supplier’s profile is
created to record the supplier’s credit, which the buyer
uses to assess a particular supplier. It is also used by
the buyer to choose the appropriate negotiation strategy
when negotiating with that supplier.

A negotiation trace is a log of all the messages exchanged
between two negotiation partners in a negotiation process. For
successful negotiation in which an agreement is produced in
the end, the negotiation trace contains useful knowledge
describing the nature and progress of the negotiation.

Tuesday 3 July 2012

EMBEDDED PROCESSING ARCHITECTURES FOR SECURITY

In the past, embedded systems tended to perform one or a few fixed functions.
The trend is for embedded systems to perform multiple functions and
also to provide the ability to download new software to implement new or
updated applications in the field, rather than only in the more controlled environment
of the factory. While this certainly increases the flexibility and
useful lifetime of an embedded system, it poses new challenges in terms
of the increased likelihood of attacks by malicious parties. An embedded
system should ideally provide required security functions, implement them
efficiently and also defend against attacks by malicious parties. We discuss
these below, especially in the context of the additional challenges faced
by resource-constrained embedded systems in an environment of ubiquitous
networking and pervasive computing.

Figure 1 illustrates the architectural design space for secure embedded
processing systems. Different macro-architecture models are listed in the
first row, and described further below. These include an embedded
general-purpose processor (EP), an application-specific instruction set
processor (ASIP), an EP with custom hardware accelerators connected to
the processor bus, etc. The second row details instruction-set architecture and
micro-architecture choices for tuning the base processor where appropriate.
The third row articulates security processing features that must be chosen
or designed: for example, choosing the functionality to be implemented
by custom instructions, hardware accelerators, or general-purpose
instruction primitives. The fourth row involves selection of attack-resistant features
in the embedded processor and embedded system design.
This may include an enhanced memory management unit to manage a secure
memory space, process isolation architecture, additional redundant circuitry
for thwarting power analysis attacks, and fault detection circuitry.

Figure 1: Architectural design space for secure information processing

Sunday 1 July 2012

SCATTERNET PROTOCOL

We developed a new scatternet protocol (SNP) layer that
makes the Bluetooth communication transparent. A user who
wants to send data to any other device in a Bluetooth network
simply sends a packet with the address of the receiver
into the network. The SNP is responsible for finding the
shortest path through the network and for guaranteeing
that the packet is received by the target device. When the
network changes, the SNP adapts and learns new paths. Only
local information is used to find the shortest path. The
SNP extracts the routing information by looking at the
data packets that pass through the Bluetooth device it
is running on. The SNP also supports broadcasts and gives
full remote control for all connected Bluetooth devices. This
allows a user to control all Bluetooth devices in a scatternet
from a single host device. To support these functionalities
we added three new mechanisms to the original Bluetooth
stack: (1) SNP addresses are new user-defined addresses.
(2) SNP packets are responsible for carrying the payload
through the scatternet. (3) SNP friend tables contain local routing
information that is used to forward the SNP packets towards
the receiver.


Fig. 1. SNP packets. The header contains 5 bytes specifying the command,
the SNP addresses of the receiver and sender, and the number of hops a
packet has traveled, as well as 1 byte giving the amount of payload that
the packet contains.
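
Reading the header layout of Fig. 1, a hedged sketch of packing and
unpacking such a packet might look as follows; the field order and the
one-byte field widths are assumptions made for illustration.

import struct

HEADER_FORMAT = "!BBBBB"   # command, receiver addr, sender addr, hop count, payload length

def pack_snp(command, receiver, sender, hops, payload):
    header = struct.pack(HEADER_FORMAT, command, receiver, sender, hops, len(payload))
    return header + payload

def unpack_snp(frame):
    command, receiver, sender, hops, length = struct.unpack_from(HEADER_FORMAT, frame)
    return command, receiver, sender, hops, frame[5:5 + length]

# A forwarding node would inspect the receiver address, look up the next hop
# in its friend table, increment the hop counter and pass the frame on.
frame = pack_snp(command=0x01, receiver=0x2A, sender=0x05, hops=0, payload=b"hello")
print(unpack_snp(frame))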