Saturday, September 29, 2012

Unicast example in Contiki

A simple unicast example: click the button on each node to unicast a message to the other node.

Note that this code was written for the Cooja simulator.

UniA.c for node 1
#include "contiki.h"
#include "net/rime.h"
#include "dev/button-sensor.h"
#include "dev/leds.h"
#include <stdio.h>

PROCESS(example_unicast_process, "Example unicast");
AUTOSTART_PROCESSES(&example_unicast_process);

static void
recv_uc(struct unicast_conn *c, const rimeaddr_t *from)
{
  printf("broadcast message received from %d.%d: '%s'\n",from->u8[0], from->u8[1], (char *)packetbuf_dataptr());
}

static const struct unicast_callbacks unicast_callbacks = {recv_uc};

static struct unicast_conn uc;

static void unicast_message(void)
{
    rimeaddr_t addr;

    /* Send "AAAAA" to node 2.0, unless this node is 2.0 itself. */
    packetbuf_copyfrom("AAAAA", 5);
    addr.u8[0] = 2;
    addr.u8[1] = 0;
    if(!rimeaddr_cmp(&addr, &rimeaddr_node_addr))
    {
      unicast_send(&uc, &addr);
    }
}

PROCESS_THREAD(example_unicast_process, ev, data)
{
  PROCESS_EXITHANDLER(unicast_close(&uc);)

  PROCESS_BEGIN();

  SENSORS_ACTIVATE(button_sensor);

  /* Open the connection once, on Rime channel 146. */
  unicast_open(&uc, 146, &unicast_callbacks);

  while(1) {
    PROCESS_WAIT_EVENT_UNTIL(ev == sensors_event && data == &button_sensor);
    unicast_message();
    printf("message sent.\n");
  }

  PROCESS_END();
}

UniB.c for node 2
#include "contiki.h"
#include "net/rime.h"
#include "dev/button-sensor.h"
#include "dev/leds.h"
#include <stdio.h>

PROCESS(example_unicast_process, "Example unicast");
AUTOSTART_PROCESSES(&example_unicast_process);

static void
recv_uc(struct unicast_conn *c, const rimeaddr_t *from)
{
  printf("broadcast message received from %d.%d: '%s'\n",from->u8[0], from->u8[1], (char *)packetbuf_dataptr());
}

static const struct unicast_callbacks unicast_callbacks = {recv_uc};

static struct unicast_conn uc;

static void unicast_message(void)
{
    rimeaddr_t addr;

    /* Send "BBBBB" to node 1.0, unless this node is 1.0 itself. */
    packetbuf_copyfrom("BBBBB", 5);
    addr.u8[0] = 1;
    addr.u8[1] = 0;
    if(!rimeaddr_cmp(&addr, &rimeaddr_node_addr))
    {
      unicast_send(&uc, &addr);
    }
}

PROCESS_THREAD(example_unicast_process, ev, data)
{
  PROCESS_EXITHANDLER(unicast_close(&uc);)

  PROCESS_BEGIN();

  SENSORS_ACTIVATE(button_sensor);

  /* Open the connection once, on Rime channel 146. */
  unicast_open(&uc, 146, &unicast_callbacks);

  while(1) {
    PROCESS_WAIT_EVENT_UNTIL(ev == sensors_event && data == &button_sensor);
    unicast_message();
    printf("message sent.\n");
  }

  PROCESS_END();
}

Friday, September 14, 2012

Prime Fields Example

In cryptography, prime fields play a major role in the underlying mathematical problems. Below is a simple example of the prime field of order 29, denoted F29.

The elements of F29 are {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28}

For any integer a, a mod p shall denote the unique integer remainder r, 0 ≤ r ≤ p − 1, obtained upon dividing a by p; this operation is called reduction modulo p.

(i) Addition: 17 + 20 = 8 since 37 mod 29 = 8

(ii) Subtraction: 17 − 20 = 26 since −3 mod 29 = 26

(iii) Multiplication: 17 · 20 = 21 since 340 mod 29 = 21

(iv) Inversion: 17^(-1) = 12 since 17 · 12 mod 29 = 1
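
As a quick check, the four operations above can be reproduced with a few lines of C (a minimal sketch; the brute-force inverse below is only practical for tiny fields such as F29):

/* Arithmetic in the prime field F29. */
#include <stdio.h>

#define P 29

/* Reduction modulo p that always returns a value in [0, p-1]. */
static int mod_p(int a) { return ((a % P) + P) % P; }

/* Brute-force inversion: find x with (a * x) mod p == 1. */
static int inv_p(int a)
{
  int x;
  for(x = 1; x < P; x++) {
    if(mod_p(a * x) == 1) return x;
  }
  return 0; /* no inverse (only happens for a == 0) */
}

int main(void)
{
  printf("17 + 20  = %d\n", mod_p(17 + 20));  /* 8  */
  printf("17 - 20  = %d\n", mod_p(17 - 20));  /* 26 */
  printf("17 * 20  = %d\n", mod_p(17 * 20));  /* 21 */
  printf("17^(-1)  = %d\n", inv_p(17));       /* 12 */
  return 0;
}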

Monday, September 10, 2012

RSA Encryption Scheme

RSA, named after its inventors Rivest, Shamir and Adleman, was proposed in 1977 shortly after the discovery of public-key cryptography. 


RSA key pair generation

INPUT: Security parameter l.
OUTPUT: RSA public key (n, e) and private key d.
   1. Randomly select two primes p and q of the same bitlength l/2.
   2. Compute n = pq and φ = ( p − 1)(q − 1).
   3. Select an arbitrary integer e with 1 < e < φ and gcd(e, φ) = 1.
   4. Compute the integer d satisfying 1 < d < φ and ed ≡ 1 (mod φ).
   5. Return(n, e, d)

Basic RSA encryption

INPUT: RSA public key (n, e), plaintext m ∈ [0, n − 1].
OUTPUT: Ciphertext c.
   1. Compute c = m^e mod n.
   2. Return(c)

Basic RSA decryption

INPUT: RSA public key (n, e), RSA private key d, ciphertext c.
OUTPUT: Plaintext m.
   1. Compute m = c^d mod n.
   2. Return(m)
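
To make the flow concrete, here is a toy C sketch with deliberately tiny, insecure parameters (assumed values p = 61, q = 53, so n = 3233, φ = 3120, e = 17, d = 2753). Real RSA needs big-integer arithmetic and proper padding:

/* Textbook RSA encryption/decryption with toy parameters (illustration only). */
#include <stdio.h>

/* Square-and-multiply: computes (base^exp) mod m. */
static unsigned long long power_mod(unsigned long long base, unsigned long long exp,
                                    unsigned long long m)
{
  unsigned long long result = 1;
  base %= m;
  while(exp > 0) {
    if(exp & 1) result = (result * base) % m;
    base = (base * base) % m;
    exp >>= 1;
  }
  return result;
}

int main(void)
{
  /* Assumed toy key pair: p = 61, q = 53 -> n = 3233, phi = 3120, e = 17, d = 2753. */
  unsigned long long n = 3233, e = 17, d = 2753;
  unsigned long long m = 65;                    /* plaintext in [0, n-1] */

  unsigned long long c = power_mod(m, e, n);    /* encryption: c = m^e mod n */
  unsigned long long r = power_mod(c, d, n);    /* decryption: m = c^d mod n */

  printf("plaintext  m = %llu\n", m);
  printf("ciphertext c = %llu\n", c);
  printf("recovered  m = %llu\n", r);
  return 0;
}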


Implementations in Java
Implementations in .NET

Sunday, September 9, 2012

RSA Digital Signature Scheme

RSA, named after its inventors Rivest, Shamir and Adleman, was proposed in 1977 shortly after the discovery of public-key cryptography. 

RSA key pair generation

INPUT: Security parameter l.
OUTPUT: RSA public key (n, e) and private key d.
   1. Randomly select two primes p and q of the same bitlength l/2.
   2. Compute n = pq and φ = ( p − 1)(q − 1).
   3. Select an arbitrary integer e with 1 < e < φ and gcd(e, φ) = 1.
   4. Compute the integer d satisfying 1 < d < φ and ed ≡ 1 (mod φ).
   5. Return(n, e, d).

Basic RSA signature generation

INPUT: RSA public key (n, e), RSA private key d, message m.
OUTPUT: Signature s.
   1. Compute h = H (m) where H is a hash function.
   2. Compute s = h^d mod n.
   3. Return(s).

Basic RSA signature verification

INPUT: RSA public key (n, e), message m, signature s.
OUTPUT: Acceptance or rejection of the signature.
   1. Compute h = H (m).
   2. Compute h′ = s^e mod n.
   3. If h = h′ then return(“Accept the signature”);
       Else return(“Reject the signature”).
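
The same modular-exponentiation routine illustrates signing and verification (a sketch reusing the toy key pair from the encryption example above; a fixed stand-in value is used in place of the real hash H(m)):

/* Textbook RSA sign/verify with toy parameters (illustration only). */
#include <stdio.h>

static unsigned long long power_mod(unsigned long long base, unsigned long long exp,
                                    unsigned long long m)
{
  unsigned long long result = 1;
  base %= m;
  while(exp > 0) {
    if(exp & 1) result = (result * base) % m;
    base = (base * base) % m;
    exp >>= 1;
  }
  return result;
}

int main(void)
{
  unsigned long long n = 3233, e = 17, d = 2753;   /* assumed toy key pair */
  unsigned long long h = 123;                      /* stand-in for H(m), already in [0, n-1] */

  unsigned long long s  = power_mod(h, d, n);      /* signature:    s  = h^d mod n */
  unsigned long long h2 = power_mod(s, e, n);      /* verification: h' = s^e mod n */

  printf("%s\n", h == h2 ? "Accept the signature" : "Reject the signature");
  return 0;
}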

Implementations in Java
Implementations in .NET



Wednesday, August 29, 2012

Solution for New XAMPP security concept in ubuntu

When you use phpMyAdmin in XAMPP 1.8.0, you will run into this problem. To avoid it, you need to make some changes in the httpd-xampp.conf file.



Open the file as super user:

sudo gedit /opt/lampp/etc/extra/httpd-xampp.conf

It will look like this:

1:  <IfDefine PHP4>  
2:  LoadModule php4_module    modules/libphp4.so  
3:  </IfDefine>  
4:  <IfDefine PHP5>  
5:  LoadModule php5_module    modules/libphp5.so  
6:  </IfDefine>  
7:  # Disabled in XAMPP 1.8.0-beta2 because of current incompatibilities with Apache 2.4  
8:  # LoadModule perl_module    modules/mod_perl.so  
9:  Alias /phpmyadmin "/opt/lampp/phpmyadmin"  
10:  Alias /phpsqliteadmin "/opt/lampp/phpsqliteadmin"  
11:  # since XAMPP 1.4.3  
12:  <Directory "/opt/lampp/phpmyadmin">  
13:    AllowOverride AuthConfig Limit  
14:    Order allow,deny  
15:    Allow from all  
16:  </Directory>  
17:  <Directory "/opt/lampp/phpsqliteadmin">  
18:    AllowOverride AuthConfig Limit  
19:    Order allow,deny  
20:    Allow from all  
21:  </Directory>  
22:  # since LAMPP 1.0RC1  
23:  AddType application/x-httpd-php .php .php3 .php4  
24:  XBitHack on  
25:  # since 0.9.8 we've mod_perl  
26:  <IfModule mod_perl.c>  
27:      AddHandler perl-script .pl  
28:            PerlHandler ModPerl::PerlRunPrefork  
29:            PerlOptions +ParseHeaders  
30:      PerlSendHeader On  
31:  </IfModule>  
32:  # demo for mod_perl responsehandler  
33:  #PerlModule Apache::CurrentTime  
34:  #<Location /time>  
35:  #   SetHandler modperl  
36:  #   PerlResponseHandler Apache::CurrentTime  
37:  #</Location>  
38:  # AcceptMutex sysvsem is default but on some systems we need this  
39:  # thanks to jeff ort for this hint  
40:  #AcceptMutex flock  
41:  #LockFile /opt/lampp/logs/accept.lock  
42:  # this makes mod_dbd happy - oswald, 02aug06  
43:  # mod_dbd doesn't work in Apache 2.2.3: getting always heaps of "glibc detected *** corrupted double-linked list" on shutdown - oswald, 10sep06  
44:  #DBDriver sqlite3  
45:  #  
46:  # New XAMPP security concept  
47:  #  
48:  <LocationMatch "^/(?i:(?:xampp|security|licenses|phpmyadmin|webalizer|server-status|server-info))">  
49:       Order deny,allow  
50:       Deny from all  
51:       Allow from ::1 127.0.0.0/8 \  
52:  fc00::/7 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 \  
53:  81.196.40.94/32  
54:       ErrorDocument 403 /error/XAMPP_FORBIDDEN.html.var  
55:  </LocationMatch>  


2. Edit it as follows (the changes are: add "Require all granted" to the two <Directory> blocks, and in the <LocationMatch> block replace the Deny/Allow rules with "Allow from all"):
1:  <IfDefine PHP4>  
2:  LoadModule php4_module    modules/libphp4.so  
3:  </IfDefine>  
4:  <IfDefine PHP5>  
5:  LoadModule php5_module    modules/libphp5.so  
6:  </IfDefine>  
7:  # Disabled in XAMPP 1.8.0-beta2 because of current incompatibilities with Apache 2.4  
8:  # LoadModule perl_module    modules/mod_perl.so  
9:  Alias /phpmyadmin "/opt/lampp/phpmyadmin"  
10:  Alias /phpsqliteadmin "/opt/lampp/phpsqliteadmin"  
11:  # since XAMPP 1.4.3  
12:  <Directory "/opt/lampp/phpmyadmin">  
13:    AllowOverride AuthConfig Limit  
14:    Require all granted  
15:    Order allow,deny  
16:    Allow from all  
17:  </Directory>  
18:  <Directory "/opt/lampp/phpsqliteadmin">  
19:    AllowOverride AuthConfig Limit  
20:    Require all granted  
21:    Order allow,deny  
22:    Allow from all  
23:  </Directory>  
24:  # since LAMPP 1.0RC1  
25:  AddType application/x-httpd-php .php .php3 .php4  
26:  XBitHack on  
27:  # since 0.9.8 we've mod_perl  
28:  <IfModule mod_perl.c>  
29:      AddHandler perl-script .pl  
30:            PerlHandler ModPerl::PerlRunPrefork  
31:            PerlOptions +ParseHeaders  
32:      PerlSendHeader On  
33:  </IfModule>  
34:  # demo for mod_perl responsehandler  
35:  #PerlModule Apache::CurrentTime  
36:  #<Location /time>  
37:  #   SetHandler modperl  
38:  #   PerlResponseHandler Apache::CurrentTime  
39:  #</Location>  
40:  # AcceptMutex sysvsem is default but on some systems we need this  
41:  # thanks to jeff ort for this hint  
42:  #AcceptMutex flock  
43:  #LockFile /opt/lampp/logs/accept.lock  
44:  # this makes mod_dbd happy - oswald, 02aug06  
45:  # mod_dbd doesn't work in Apache 2.2.3: getting always heaps of "glibc detected *** corrupted double-linked list" on shutdown - oswald, 10sep06  
46:  #DBDriver sqlite3  
47:  #  
48:  # New XAMPP security concept  
49:  #  
50:  <LocationMatch "^/(?i:(?:xampp|security|licenses|phpmyadmin|webalizer|server-status|server-info))">  
51:       Order deny,allow  
52:       Allow from all  
53:       ErrorDocument 403 /error/XAMPP_FORBIDDEN.html.var  
54:  </LocationMatch>  

Then restart XAMPP.
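
This can be done from the terminal with the XAMPP control script:

sudo /opt/lampp/lampp restart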

If security is a concern for you, do not use this method.

Wednesday, June 13, 2012

Enable hibernate in ubuntu 12.04

Hibernate is disabled by default in Ubuntu 12.04 because of a bug. Follow the instructions below to enable hibernate again.

Open a terminal and run this command:
$sudo gedit /var/lib/polkit-1/localauthority/10-vendor.d/com.ubuntu.desktop.pkla

gedit will then open the file; edit the hibernate section so that it looks like this:

[Re-enable hibernate]
Identity=unix-user:*
Action=org.freedesktop.upower.hibernate
ResultActive=yes

Restart the computer and hibernate will be back.

Thursday, June 7, 2012

Most trusted universities for online activity

The top ten most trusted universities according to iovation, with most trusted being number one, are:


 1. University of California, San Francisco
 2. Columbia University
 3. Cornell University
 4. University of Texas
 5. University of Chicago
 6. University of California, Los Angeles
 7. Northwestern University
 8. Texas A&M University
 9. University of Utah
10. University of Virginia.


Source

Tuesday, June 5, 2012

Canny and Sobel Edge Detection in C#

Sobel and Canny are two major edge detection algorithms in image processing. Here I have implemented both algorithms in C#.

Download the source code from here.

Canny Edge Detection

Sobel Edge Detection




You can improve the program by using optimization methods such as threading and loop optimization.
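
For readers who want the core idea without downloading the project, here is a minimal sketch of the Sobel operator in plain C (an illustration only, not the C# implementation linked above; compile with -lm):

/* Sobel gradient magnitude on a grayscale image stored as width*height bytes. */
#include <stdio.h>
#include <math.h>

static void sobel(const unsigned char *in, unsigned char *out, int w, int h)
{
  /* 3x3 Sobel kernels for horizontal (gx) and vertical (gy) gradients. */
  const int kx[3][3] = { {-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1} };
  const int ky[3][3] = { {-1, -2, -1}, { 0, 0, 0}, { 1, 2, 1} };
  int x, y, i, j;

  for(y = 1; y < h - 1; y++) {
    for(x = 1; x < w - 1; x++) {
      int gx = 0, gy = 0;
      for(j = -1; j <= 1; j++) {
        for(i = -1; i <= 1; i++) {
          int p = in[(y + j) * w + (x + i)];
          gx += kx[j + 1][i + 1] * p;
          gy += ky[j + 1][i + 1] * p;
        }
      }
      int mag = (int)sqrt((double)(gx * gx + gy * gy));
      out[y * w + x] = (unsigned char)(mag > 255 ? 255 : mag);
    }
  }
}

int main(void)
{
  int w = 8, h = 8, i;
  unsigned char in[64], out[64] = {0};

  /* Test image: left half dark, right half bright -> a vertical edge. */
  for(i = 0; i < 64; i++) in[i] = (i % w < w / 2) ? 20 : 220;

  sobel(in, out, w, h);
  printf("gradient magnitude at the edge: %d\n", out[3 * w + 4]);
  return 0;
}

Canny adds Gaussian smoothing, non-maximum suppression and hysteresis thresholding on top of these gradients.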

Monday, June 4, 2012

Array addition using OpenMP

OpenMP is an application programming interface (API) that supports multi-platform shared memory multiprocessing programming.

To compile this in linux environment
$gcc -fopenmp omp_add.c -o omp_add

Then you need to define the number of threads:
$export OMP_NUM_THREADS=10

To run program:
$./omp_add

Here is omp_add.c
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main (int argc, char *argv[]) {
  
    int i, tid, nthreads, n = 10, N = 100000000;
    double *A, *B, tResult = 0.0, fResult = 0.0;   /* initialise accumulators */

    time_t start, stop;

      A = (double *) malloc(N*sizeof(double));
      B = (double *) malloc(N*sizeof(double));

      for (i=0; i<N; i++) {
          A[i] = (double)(i+1);
          B[i] = (double)(i+1);
      }

    time(&start);

    /*
    //this block use single process
    for (i=0; i<N; i++)
    {
            fResult = fResult + A[i] + B[i];
    }
    
    */
    
    //begin of parallel section
    
    #pragma omp parallel private(tid, i, tResult) shared(n, A, B, fResult)
    {
        tResult = 0.0;   /* each thread's private copy starts uninitialised */
        tid = omp_get_thread_num();
        if (tid == 0) 
        {
            nthreads = omp_get_num_threads();
            printf("Number of threads = %d\n", nthreads);
        }

    #pragma omp for schedule (static, n)
        for (i=0; i < N; i++) {
            tResult = tResult + A[i] + B[i];
        }

    #pragma omp for nowait
        for (i=0; i < n; i++) 
        {
            printf("Thread %d does iteration %d\n", tid, i);
        }
        
    #pragma omp critical 
        fResult = fResult + tResult; 
    }
    //end of parallel section
    
    time(&stop);

      printf("%f\n",fResult);
      
       printf("Finished in about %.0f seconds. \n", difftime(stop, start));
  
     exit(0);
}

Special thanks to Dr. M.C. Jayawardena - BSc (Col), PhD (Uppsala), MIEEE, AMCS(SL) (Lecturer)

For more examples

Sunday, June 3, 2012

Matrix Multiplication using MPI with C

Here is code for matrix multiplication using the Message Passing Interface (MPI). If you are dealing with parallel computing, MPI plays a major role. Before running MPI code you will need a working MPI environment; in my case I am using a university cluster.

Here is the code:

 /**********************************************************************  
  * MPI-based matrix multiplication AxB=C   
  *********************************************************************/  
 #include <stdio.h>  
 #include "mpi.h"  
 #define N 500    /* number of rows and columns in matrix */  
 MPI_Status status;  
 double a[N][N],b[N][N],c[N][N];       
 int main(int argc, char **argv)
 {  
  int numtasks,taskid,numworkers,source,dest,rows,offset,i,j,k,remainPart,originalRows;  
  struct timeval start, stop;  
  MPI_Init(&argc, &argv);  
  MPI_Comm_rank(MPI_COMM_WORLD, &taskid);  
  MPI_Comm_size(MPI_COMM_WORLD, &numtasks);  
  numworkers = numtasks-1;  
  /*---------------------------- master ----------------------------*/  
  if (taskid == 0) {  
   for (i=0; i<N; i++) {  
    for (j=0; j<N; j++) {    
     a[i][j]= 1.0;  
     b[i][j]= 2.0;  
    }  
   }  
   gettimeofday(&start, 0);  
   /* send matrix data to the worker tasks */  
   rows = N/numworkers;  
   offset = 0;  
   remainPart = N%numworkers;  
   for (dest=1; dest<=numworkers; dest++)   
   {          
    if (remainPart > 0)  
    {      
      originalRows = rows;  
      ++rows;  
      remainPart--;  
      MPI_Send(&offset, 1, MPI_INT, dest, 1, MPI_COMM_WORLD);  
      MPI_Send(&rows, 1, MPI_INT, dest, 1, MPI_COMM_WORLD);  
      MPI_Send(&a[offset][0], rows*N, MPI_DOUBLE,dest,1, MPI_COMM_WORLD);  
      MPI_Send(&b, N*N, MPI_DOUBLE, dest, 1, MPI_COMM_WORLD);  
      offset = offset + rows;   
      rows = originalRows;  
    }  
    else  
    {      
        MPI_Send(&offset, 1, MPI_INT, dest, 1, MPI_COMM_WORLD);  
        MPI_Send(&rows, 1, MPI_INT, dest, 1, MPI_COMM_WORLD);  
        MPI_Send(&a[offset][0], rows*N, MPI_DOUBLE,dest,1, MPI_COMM_WORLD);  
        MPI_Send(&b, N*N, MPI_DOUBLE, dest, 1, MPI_COMM_WORLD);  
        offset = offset + rows;  
    }  
   }  
   /* wait for results from all worker tasks */  
   for (i=1; i<=numworkers; i++)      
   {              
    source = i;  
    MPI_Recv(&offset, 1, MPI_INT, source, 2, MPI_COMM_WORLD, &status);  
    MPI_Recv(&rows, 1, MPI_INT, source, 2, MPI_COMM_WORLD, &status);  
    MPI_Recv(&c[offset][0], rows*N, MPI_DOUBLE, source, 2, MPI_COMM_WORLD, &status);  
   }  
   gettimeofday(&stop, 0);  
   /* printf("Here is the result matrix:\n");  
   for (i=0; i<N; i++) {   
    for (j=0; j<N; j++)   
     printf("%6.2f  ", c[i][j]);  
    printf ("\n");  
   }  
  */  
   fprintf(stdout,"Time = %.6f\n\n",  
      (stop.tv_sec+stop.tv_usec*1e-6)-(start.tv_sec+start.tv_usec*1e-6));  
  }   
  /*---------------------------- worker----------------------------*/  
  if (taskid > 0) {  
   source = 0;  
   MPI_Recv(&offset, 1, MPI_INT, source, 1, MPI_COMM_WORLD, &status);  
   MPI_Recv(&rows, 1, MPI_INT, source, 1, MPI_COMM_WORLD, &status);  
   MPI_Recv(&a, rows*N, MPI_DOUBLE, source, 1, MPI_COMM_WORLD, &status);  
   MPI_Recv(&b, N*N, MPI_DOUBLE, source, 1, MPI_COMM_WORLD, &status);  
   /* Matrix multiplication */  
   for (k=0; k<N; k++)  
    for (i=0; i<rows; i++) {  
     c[i][k] = 0.0;  
     for (j=0; j<N; j++)  
      c[i][k] = c[i][k] + a[i][j] * b[j][k];  
    }  
   MPI_Send(&offset, 1, MPI_INT, 0, 2, MPI_COMM_WORLD);  
   MPI_Send(&rows, 1, MPI_INT, 0, 2, MPI_COMM_WORLD);  
   MPI_Send(&c, rows*N, MPI_DOUBLE, 0, 2, MPI_COMM_WORLD);  
  }   
  MPI_Finalize();
  return 0;
 }

How to compile
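
With an MPI implementation such as Open MPI or MPICH installed, the usual command is (assuming the file is saved as matmul.c):

mpicc matmul.c -o matmul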



How to run
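
For example, to run with 5 processes (1 master and 4 workers) on the local machine:

mpirun -np 5 ./matmul

On a cluster the exact invocation (hostfile, scheduler submission, and so on) depends on the local setup.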



Special thanks to Mr. K.P.M.K. Silva - BSc (Col), MSc (York) (Lecturer)

Saturday, June 2, 2012

Loop Optimization

  • Loop Interchange : Loops are reordered to minimize the stride and align the access pattern in the loop with the pattern of data storage in memory
Example: In C

for (i = 0; i < 4; i++)
  for (j = 0; j < 4; j++)
    a[j][i] = ...

after interchanging

for (j = 0; j < 4; j++)
  for (i = 0; i < 4; i++)
    a[j][i] = ...
  • Loop Fusion : Adjacent or closely located loops are fused into one single loop.
void nofusion()
{
  int i;
  for (i = 0; i<nodes;i++)
  {
     a[i] = a[i] * small;
     c[i] = (a[i]+b[i])*relaxn;
  }
  for (i = 1; i<nodes - 1;i++)
  {
     d[i] = c[i] - a[i];
  }
}

void fusion()
{
  int i;
  a[0] = a[0]*small;
  c[0] = (a[0]+b[0])*relaxn;
  a[nodes - 1] = a[nodes - 1] * small;
  c[nodes - 1] = (a[nodes - 1] + b[nodes - 1]) * relaxn;
  for (i = 1; i < nodes - 1;i++)
  {
     a[i] = a[i] * small;
     c[i] = (a[i] + b[i]) * relaxn;
     d[i] = c[i] - a[i];
  }
}
  • Loop Fission : Split the original loop if it is worthwhile
void nofission() 
{
int i, a[100], b[100];
for (i = 0; i < 100; i++) 
{
  a[i]=1;
  b[i]=2;
}
}

void fission()
{
int i, a[100], b[100];
for (i = 0; i < 100; i++) 
{
  a[i]=1;
}
for (i = 0; i < 100; i++) 
{
  b[i] = 2;
}
}
  • Loop Peeling: Peel-off the edge iterations of the loop.
Before peeling:

for (i = 1; i <= N; i++)
{
  if (i==1) x[i]=0;
  else
    if (i==N) x[i]=N;
    else
    x[i] = x[i] + y[i];
}

After peeling:

x[1] = 0;
for (i = 2; i < N; i++)
x[i] = x[i] + y[i];
x[N] = N;
  • Loop Unrolling: Reduces the effect of branches.
Before unrolling:

for (i = 0; i < N; i++)
  y[i] = x[i];

After unrolling by a factor of four:

nend = 4*(N/4);
for (i = 0; i < nend; i += 4)
{
  y[i]   = x[i];
  y[i+1] = x[i+1];
  y[i+2] = x[i+2];
  y[i+3] = x[i+3];
}
for (i = nend; i < N; i++)
  y[i] = x[i];

Wednesday, May 16, 2012

Install Contiki 2.5 on ubuntu 11.10

  • Contiki – a dynamic operating system for networked embedded systems
  • Loadable modules, multiple network stacks, multiple threading models
  • Open source; 3-clause BSD licence
  • Small memory footprint
  • Designed for portability
Here I will explain how to install it manually on Ubuntu 11.10. (This method works for any Ubuntu release and any Contiki release.) You can use Instant Contiki instead of this method.


1. Install the msp430 toolchain. Type this command in a terminal:

sudo apt-get install binutils-msp430 gcc-msp430 msp430-libc

2. Install the AVR toolchain. Type this command in a terminal:

sudo apt-get install gcc-avr binutils-avr gdb-avr avr-libc avrdude

3. Now you need a JRE and JDK. Ubuntu no longer provides the Sun Java repositories, so you can install OpenJDK as a substitute (it works well). Use this command to install them:

sudo apt-get install openjdk-7-jdk openjdk-7-jre

4. Then you need to set up the JAVA_HOME and PATH environment variables. Refer to this link to set up the Java path, or if you need to install Sun (Oracle) Java refer to this link.

5. To run the Cooja simulator you will need Ant. To install it, type this command in the terminal:

sudo apt-get install ant

6. Then you need to download the Contiki source code. Use this SourceForge link to download it. You can select whichever version you like.

7. Now you have installed the tools needed to run Contiki. Unzip contiki-2.5.zip and change directory into it. Then go into examples/hello-world.

8. Type this in the terminal: make TARGET=native hello-world. If no errors occur, type this in the terminal: ./hello-world.native. If you get this output, you have just said hello to the Contiki world, congratulations.


9. Now you need to test your program in the Cooja simulator. Run these commands:

$cd contiki-2.5/tools/cooja
$ant run

Then the Cooja simulator will open. Follow the instructions on the website that I linked.

10. Then you need to test your program in the MSPSim simulator.
Here I used my own simple blink code to check MSPSim; a minimal version is sketched below.
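
A blink process along these lines works for this test (a minimal sketch assuming Contiki's standard leds and etimer APIs; not necessarily the exact code used here):

/* blink.c - toggle the red LED once per second. */
#include "contiki.h"
#include "dev/leds.h"

PROCESS(blink_process, "Blink");
AUTOSTART_PROCESSES(&blink_process);

PROCESS_THREAD(blink_process, ev, data)
{
  static struct etimer et;

  PROCESS_BEGIN();

  while(1) {
    etimer_set(&et, CLOCK_SECOND);
    PROCESS_WAIT_EVENT_UNTIL(etimer_expired(&et));
    leds_toggle(LEDS_RED);
  }

  PROCESS_END();
}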

$cd contiki-2.5/examples/blink
$make TARGET=sky blink.mspsim

Here is a screenshot of what I got:


OK, now you have manually installed Contiki 2.5 on Ubuntu.

P.S.
If you get an error in steps 9 and 10 (this happens when using msp430 microcontroller chips; AVR does not have this issue), it is an msp430 toolchain problem in Ubuntu (sys/unistd.h missing). There is a naive way to solve it, and it is much simpler than changing your operating system to Ubuntu 11.04 or using Instant Contiki. You need to download this file and then copy it into /usr/msp430/include/sys.

To do that, run this command:
sudo cp unistd.h /usr/msp430/include/sys

OK, then your problem should be solved.

Again I have to ask: are you getting this kind of error?
"/core/dev/ds2411.c:199: undefined reference to 'BV'."

Add these lines to contiki-2.5/platform/sky/contiki-conf.h:

#define _BV(bitno) (1 << (bitno))
#define BV(bitno) _BV(bitno)

OK, I think you are now ready to do any development using Contiki.

Wednesday, May 2, 2012

ElGamal example over GF(11) field

This is a very simple example of ElGamal over a small field. In practice it is highly recommended to use much larger fields.
  • Our group is the multiplicative group of GF(11) = {1,2,3,4,5,6,7,8,9,10}.
  • Let us take n = 10 and α = 2. Bob randomly selects b = 5; then α^b = 2^5 = 32 ≡ 10 (mod 11). His public key is (n, α, α^b) = (10, 2, 10) and his private key is b = 5.
  • Alice chooses k = 7 and calculates α^k = 2^7 = 128 ≡ 7 (mod 11).
  • Alice looks up α^b = 10, encodes the message as m = 3, and calculates m·(α^b)^k = 3 · 10^7 ≡ 8 (mod 11).
  • Alice sends (α^k, m·(α^b)^k) = (7, 8).
  • Bob calculates m·(α^b)^k · ((α^k)^b)^(-1) = 8 · (7^5)^(-1) ≡ 3 (mod 11).
  • Thus Bob recovers the message, 3, as sent by Alice.
You will need some background in finite fields and modular arithmetic.
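
The numbers above can be checked with a small C program (a sketch only; real ElGamal uses big-integer arithmetic over much larger groups):

/* Verifies the toy ElGamal example over GF(11). */
#include <stdio.h>

#define P 11

/* (base^exp) mod P by repeated multiplication (fine for tiny exponents). */
static int power_mod(int base, int exp)
{
  int i, result = 1;
  for(i = 0; i < exp; i++) result = (result * base) % P;
  return result;
}

/* Brute-force modular inverse: find x with (a*x) mod P == 1. */
static int inv_mod(int a)
{
  int x;
  for(x = 1; x < P; x++) if((a * x) % P == 1) return x;
  return 0;
}

int main(void)
{
  int alpha = 2, b = 5, k = 7, m = 3;

  int alpha_b = power_mod(alpha, b);          /* Bob's public value: 2^5 mod 11 = 10 */
  int alpha_k = power_mod(alpha, k);          /* Alice's value:      2^7 mod 11 = 7  */
  int c2 = (m * power_mod(alpha_b, k)) % P;   /* m * (alpha^b)^k mod 11 = 8 */

  /* Bob decrypts: c2 * ((alpha^k)^b)^(-1) mod 11 */
  int recovered = (c2 * inv_mod(power_mod(alpha_k, b))) % P;

  printf("ciphertext pair: (%d, %d)\n", alpha_k, c2);   /* (7, 8) */
  printf("recovered message: %d\n", recovered);         /* 3 */
  return 0;
}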

Saturday, March 31, 2012

Hardware for Parallel Processing

1. Multi-tasking Single Processor Computer


Multiple processes can be run in a time-shared manner on an ordinary single-processor computer. It is difficult to get a speed-up unless, for example, while one process is computing the other processes are doing input/output.


2.  Shared memory computers 


       2.1 Multiprocessor Configurations (SMPs: Symmetric Multiprocessors)

SMP involves a multiprocessor computer hardware architecture where two or more identical processors are connected to a single shared memory and are controlled by a single OS instance. In an SMP, multiple identical processors share memory connected via a bus. Bus contention prevents bus architectures from scaling. As a result, SMPs generally do not comprise more than 32 processors.

       2.2 Hyperthreading (Simultaneous Multithreading [SMT])

With HT Technology, two threads can execute on the same single processor core simultaneously in parallel rather than context switching between the threads. Scheduling two threads on the same physical processor core allows better use of the processor’s resources. HT Technology adds circuitry and functionality into a traditional processor to enable one physical processor to appear as two separate processors. Each processor is then referred to as a logical processor.

       2.3 Dual Core

This term refers to integrated circuit (IC) chips that contain two complete physical computer processors (cores) in the same IC package. Typically, this means that two identical processors are manufactured so they reside side-by-side on the same die. It is also possible to (vertically) stack two separate processor die and place them in the same IC package. Each of the physical processor cores has its own resources
(architectural state, registers, execution units, etc.).

        2.4 Multi Core

The multi core system is an extension to the dual core system except that it would consist of more than 2 processors. The current trends in processor technology indicate that the number of processor cores in one IC chip will continue to increase. If we assume that the number of transistors per processor core remains relatively fixed, it is reasonable to assume that the number of processor cores could follow Moore’s Law, which states that the number of transistors per a certain area on the chip will double approximately every 18 months. Even if this trend does not follow Moore’s Law, the number of processor cores per chip appears destined to steadily increase - based on statements from several processor manufacturers. 

        2.5 Many Core

Many core is a multi-core processor in which the number of cores is large enough that traditional multi-processor techniques are no longer efficient, largely because of issues with congestion in supplying instructions and data to the many processors. The many-core threshold is roughly in the range of several tens to hundreds of cores.

        2.6 Graphics Processing Units (GPUs)

Graphics cards, often having 100+ processor cores and a rich memory structure that those cores can share, make a good general-purpose computing platform. Each core can do less than your CPU, but with their powers combined they become a fast parallel computer.
  
3. Distributed Computing

A distributed computer is a distributed-memory computer system in which the processing elements are connected by a network. It is also known as a distributed-memory multiprocessor or multicomputer.


        3.1 Cluster computing

A cluster is a group of loosely coupled computers that work together closely, so that in some respects they can be regarded as a single computer. Clusters are composed of multiple standalone machines connected by a network. While machines in a cluster do not have to be symmetric, load balancing is more difficult if they are not. The most common type of cluster is the Beowulf cluster, which is a cluster implemented on multiple identical commercial off-the-shelf computers connected with a TCP/IP Ethernet local area network.

        3.2 Massively parallel processing

A massively parallel processor (MPP) is a single computer with many networked processors. MPPs have many of the same characteristics as clusters, but MPPs have specialized interconnect networks (whereas clusters use commodity hardware for networking). MPPs also tend to be larger than clusters, typically having ”far more” than 100 processors. In an MPP, each CPU contains its own memory and copy of the operating system and application. Each subsystem communicates with the others via a high-speed interconnect.

         3.3 Grid computing

Grid computing is the most distributed form of parallel computing. It makes use of computers communicating over the Internet to work on a given problem. Because of the low bandwidth and extremely high latency available on the Internet, grid computing typically deals only with embarrassingly parallel problems. Most grid computing applications use middleware, software that sits between the operating system and the application to manage network resources and standardize the software interface. Often, grid computing software makes use of ”spare cycles”, performing computations at times when a computer is idling.


Note: Please do not confuse these general terms with commercial product names (e.g. Intel Dual Core, Core 2 Duo).