Parallel simulated annealing for the vehicle routing problem with time windows. A parallel simulated annealing algorithm for solving the vehicle routing problem with time windows (VRPTW) is investigated. The objective is to find the best possible solutions to some well-known instances of the problem by using parallelism. Another objective is to speed up sequential annealing by making use of a number of co-operating processes without loss of solution quality. The quality of a solution is understood as its proximity to the optimum (or best known) solution. The empirical evidence indicates that parallel simulated annealing can be applied with success to bicriterion optimization problems [VRPTW best solutions,C06a,C06,CW06,CW05,CCG03,CC02].
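To give a flavour of the co-operation scheme, the sketch below runs several annealing chains in phases and lets them share the best solution found so far after every phase. It is only an illustration on a toy one-dimensional objective; the chain count, cooling rate and all function names are assumptions, not the VRPTW implementation described in the papers.

    import math, random

    def anneal_phase(state, temp, steps, cost, neighbour):
        # one annealing phase of a single chain
        best = state
        for _ in range(steps):
            cand = neighbour(state)
            delta = cost(cand) - cost(state)
            if delta <= 0 or random.random() < math.exp(-delta / temp):
                state = cand
                if cost(state) < cost(best):
                    best = state
        return best

    def cooperative_annealing(n_chains=4, phases=20, steps=200):
        cost = lambda x: (x - 3.21) ** 2               # toy objective (assumption)
        neighbour = lambda x: x + random.uniform(-0.5, 0.5)
        chains = [random.uniform(-10, 10) for _ in range(n_chains)]
        temp = 1.0
        for _ in range(phases):
            chains = [anneal_phase(s, temp, steps, cost, neighbour) for s in chains]
            best = min(chains, key=cost)               # co-operation: exchange the best state
            chains = [best] * n_chains
            temp *= 0.8                                # geometric cooling
        return best

    print(cooperative_annealing())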
Parallel simulated annealing for the set-partitioning problem. A delivery problem which reduces to the NP-complete set-partitioning problem is investigated. Sequential and parallel simulated annealing algorithms for solving the delivery problem are discussed. The objective is to improve the quality of solutions to the problem by applying parallelism [C99,C00,C01].
Randomized PRAM simulation. The parallel random access machine (PRAM) is the most commonly used general-purpose machine model for describing parallel computations. Unfortunately, the PRAM model is not physically realizable, since on large machines a parallel shared-memory access can only be accomplished at the cost of a significant time delay. A number of PRAM simulation algorithms have been presented in the literature. These algorithms allow execution of PRAM programs on more realistic parallel machines. In this project we study the randomized simulation of an EREW (exclusive read, exclusive write) PRAM on a module parallel computer (MPC). The simulation is based on universal hashing. The experiments are conducted on the MPC built upon IMS T9000 transputers [CM98,CM96].
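The core idea of the randomized simulation can be illustrated as follows: shared-memory addresses are scattered over the memory modules by a hash function drawn at random from a universal class, and one EREW step costs time proportional to the most congested module. The sketch below (with an assumed prime modulus and illustrative processor and module counts) is not the transputer implementation, only a demonstration of this load-balancing mechanism.

    import random
    from collections import Counter

    P = 2_147_483_647                      # prime larger than the address universe (assumption)

    def random_universal_hash(p_modules):
        a = random.randrange(1, P)
        b = random.randrange(0, P)
        return lambda addr: ((a * addr + b) % P) % p_modules

    def simulate_step(addresses, p_modules):
        # one EREW step: every processor accesses a distinct cell; the step
        # takes time proportional to the most heavily loaded memory module
        h = random_universal_hash(p_modules)
        load = Counter(h(addr) for addr in addresses)
        return max(load.values())

    # 1024 virtual processors and 64 memory modules (illustrative sizes)
    addresses = random.sample(range(10**6), 1024)
    print("maximum module congestion:", simulate_step(addresses, 64))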
Parallel searching for the first solution. In this project a parallel algorithm for finding a first solution to the problem of generating minimal perfect hash functions was studied. A message-based distributed-memory computer is assumed as the model of parallel computation. A data structure, called the reversed trie (r-trie), was devised to carry out the search. The algorithm was implemented on transputer networks in the occam and Parallel C languages. The experiments showed that the algorithm exhibits a consistent and almost linear speedup. The r-trie structure proved to be highly memory efficient [BCK94] [BCK93,KCB93].
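The sketch below illustrates only the general first-solution search pattern: workers examine disjoint parts of the search space and stop as soon as any of them raises a shared flag. The predicate is_solution, the search ranges and the use of Python processes are assumptions made for the example; they stand in for the r-trie-based search of the project.

    import multiprocessing as mp

    def is_solution(x):
        # hypothetical test; stands in for the hash-function search of the project
        return x > 0 and x % 99_991 == 0

    def worker(lo, hi, found, result):
        for x in range(lo, hi):
            if found.is_set():            # another worker has already succeeded
                return
            if is_solution(x):
                result.value = x
                found.set()               # announce that a first solution exists
                return

    if __name__ == "__main__":
        found = mp.Event()
        result = mp.Value("q", -1)
        procs = [mp.Process(target=worker,
                            args=(i * 250_000, (i + 1) * 250_000, found, result))
                 for i in range(4)]       # four workers split the search space
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        print("first solution found:", result.value)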
Finding a fundamental-cycle set of a graph in parallel. The NP-complete problem of finding a fundamental-cycle set of a graph G with minimum total length was considered. Two parallel algorithms, of cost O(n^2/p + n log n log p) and O(m + n^2/p + n log(n/p) + n log p), for finding a suboptimal solution to this problem were studied (p is the number of processors, n the number of vertices, and m the number of edges of G). The algorithms partition the edge set and the vertex set of G among the processors, respectively, and use a new heuristic method to solve the problem. A message-based tree-connected MIMD computer is assumed as the model of parallel computation. The algorithms were implemented on a binary tree of 15 transputers, and the experiments were conducted on a wide range of random graphs. The results show that the vertex set partition algorithm, despite its inferior theoretical cost, gives better speedups and finds fundamental-cycle sets of shorter total length than the edge set partition algorithm [CKM93,CKM93a].
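For background, every spanning tree T of G induces a fundamental-cycle set with one cycle per non-tree edge, and the optimisation task is to choose T so that the total cycle length is minimal. The sketch below computes the total length for a BFS tree on a small example graph; it is a sequential illustration of the objective function, not the parallel transputer algorithms of the papers.

    from collections import deque

    def fundamental_cycle_total_length(n, edges):
        # assumes the graph is connected; vertices are 0..n-1
        adj = [[] for _ in range(n)]
        for u, v in edges:
            adj[u].append(v)
            adj[v].append(u)
        parent = [-1] * n
        depth = [0] * n
        seen = [False] * n
        seen[0] = True
        tree = set()
        q = deque([0])
        while q:                               # BFS spanning tree rooted at 0
            u = q.popleft()
            for v in adj[u]:
                if not seen[v]:
                    seen[v] = True
                    parent[v], depth[v] = u, depth[u] + 1
                    tree.add(frozenset((u, v)))
                    q.append(v)
        total = 0
        for u, v in edges:                     # each non-tree edge closes one cycle
            if frozenset((u, v)) in tree:
                continue
            a, b, length = u, v, 1             # the non-tree edge itself
            while a != b:                      # climb to the common ancestor
                if depth[a] < depth[b]:
                    a, b = b, a
                a = parent[a]
                length += 1
            total += length
        return total

    print(fundamental_cycle_total_length(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3)]))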
Solving approximation problems by ant colony programming. The method of automatic programming called genetic programming assumes that the desired program is found by means of a genetic algorithm. We propose the idea of ant colony programming, in which an ant colony algorithm, instead of a genetic algorithm, is applied to search for the program. The test results demonstrate that the proposed idea can be used with success to solve approximation problems [BC02].
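As a hedged illustration of the idea (not the system described in [BC02]), the sketch below lets ants assemble a tiny fixed-shape expression op(a, b) from a hypothetical set of operators and terminals, with choices biased by pheromone trails that are reinforced according to the approximation error of the best expression found so far.

    import random

    OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b, "*": lambda a, b: a * b}
    TERMS = {"x": lambda x: x, "1": lambda x: 1.0, "2": lambda x: 2.0, "x*x": lambda x: x * x}
    target = lambda x: x * x + 2.0                         # function to be approximated
    samples = [i / 4.0 for i in range(-8, 9)]

    def pick(options, pher):
        # roulette-wheel choice proportional to pheromone
        total = sum(pher[o] for o in options)
        r = random.uniform(0, total)
        for o in options:
            r -= pher[o]
            if r <= 0:
                return o
        return o

    def error(op, a, b):
        return sum(abs(OPS[op](TERMS[a](x), TERMS[b](x)) - target(x)) for x in samples)

    pher = [dict.fromkeys(OPS, 1.0), dict.fromkeys(TERMS, 1.0), dict.fromkeys(TERMS, 1.0)]
    best = None
    for _ in range(100):                                   # colony iterations
        for _ in range(10):                                # ants per iteration
            op, a, b = pick(OPS, pher[0]), pick(TERMS, pher[1]), pick(TERMS, pher[2])
            e = error(op, a, b)
            if best is None or e < best[0]:
                best = (e, op, a, b)
        for table in pher:                                 # pheromone evaporation
            for k in table:
                table[k] *= 0.9
        e, op, a, b = best                                 # reinforce the best program found
        for table, k in zip(pher, (op, a, b)):
            table[k] += 1.0 / (1.0 + e)

    print("best expression: %s(%s, %s), error %.3f" % (best[1], best[2], best[3], best[0]))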
Selection schemes in evolutionary algorithms. One of the steps of an evolutionary algorithm is selection, which chooses some individuals from the current population as parents of the individuals of the next population. This work focuses on loss of population diversity, defined as the proportion of population individuals which are not chosen during selection. Quantifying loss of population diversity is crucial for designing evolutionary algorithms in which the search process is directed through controlling population diversity via the parameters of a selection scheme. The aim of this work is to derive closed, approximate formulas which make it possible to determine the loss of population diversity for some selection schemes. The selection schemes under consideration are also compared with respect to effectiveness and ease of controlling population diversity [WC02].
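Loss of population diversity can also be estimated empirically, which is a convenient check on closed formulas of this kind. The sketch below does this for tournament selection with replacement; the population size, tournament sizes and number of trials are arbitrary example values, not those used in [WC02].

    import random

    def loss_of_diversity(pop_size=1000, tournament_size=2, trials=50):
        losses = []
        for _ in range(trials):
            fitness = [random.random() for _ in range(pop_size)]
            chosen = set()
            for _ in range(pop_size):          # pop_size selection events
                contenders = [random.randrange(pop_size) for _ in range(tournament_size)]
                chosen.add(max(contenders, key=fitness.__getitem__))
            losses.append(1.0 - len(chosen) / pop_size)
        return sum(losses) / trials

    for t in (2, 3, 5):
        print("tournament size %d: estimated loss of diversity %.3f"
              % (t, loss_of_diversity(tournament_size=t)))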
Grammars in genetic programming. The work consists of two parts. In the first part the idea of genetic programming is presented and the basic elements of a genetic programming system are described. In the second part, using a selected example, we describe the results of investigating the influence of program grammars on the efficiency of genetic programming [WC99].
Heuristic algorithms for solving the set-partitioning problem. A delivery problem (DP) which arose in a transportation company is investigated. It can be formulated as follows. There is a central depot of goods and n customers (nodes) located at specified distances from the depot. The goods have to be delivered to each customer by using vehicles. The number of vehicles which can be used for delivery is unlimited; however, on each trip, carried out during an eight-hour day, a vehicle crew can visit at most k customers, where k is a small constant. Let k=3. Then on a single trip the crew starts from the depot, visits one, two or three customers and returns to the depot. A set of routes which guarantees the delivery of goods to all customers is sought. Furthermore, the cost, defined as the total length of the routes in the set, should be minimized. It is easy to see that if a strong constraint on the magnitude of k is imposed (e.g. k=3), then the delivery problem reduces to the set-partitioning problem (SPP), which is NP-complete. In the project the 3-opt, simulated annealing, tabu search and ant algorithms for solving the delivery problem are investigated. The objective is to select the algorithm which finds good, i.e. near-optimal, solutions to the DP within a reasonable computing time [C98].
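The set-partitioning view can be made concrete as follows: with k=3 every feasible trip is a subset of at most three customers with a well-defined cheapest tour length, and a solution is a minimum-cost family of such subsets that partitions the customer set. The sketch below solves a toy six-customer instance exactly by enumeration; the coordinates are invented, and the heuristics actually studied in the project (3-opt, simulated annealing, tabu search, ant algorithms) are needed precisely because such enumeration does not scale.

    from itertools import permutations, combinations
    from functools import lru_cache

    DEPOT = (0.0, 0.0)
    customers = [(2, 1), (5, 4), (1, 6), (7, 2), (3, 7), (6, 6)]   # toy coordinates

    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    def trip_cost(subset):
        # cheapest depot -> customers -> depot tour over at most three customers
        best = float("inf")
        for order in permutations(subset):
            pts = [DEPOT] + [customers[i] for i in order] + [DEPOT]
            best = min(best, sum(dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1)))
        return best

    @lru_cache(maxsize=None)
    def best_partition(remaining):
        # exact minimum-cost partition of the remaining customers into trips
        if not remaining:
            return 0.0
        first = min(remaining)                 # fix one customer to avoid symmetric work
        rest = remaining - {first}
        best = float("inf")
        for extra in range(3):                 # 0, 1 or 2 companions, i.e. k = 3
            for comp in combinations(rest, extra):
                best = min(best, trip_cost((first,) + comp) + best_partition(rest - set(comp)))
        return best

    print("minimum total route length: %.2f" % best_partition(frozenset(range(len(customers)))))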
Quasi-perfect hashing. The idea of quasi-perfect hashing is introduced and applied to solve the static dictionary problem. Given a universe U and a set S of n distinct keys belonging to U, we propose a quasi-perfect hash function which allows one to find a key from S, stored in a hash table of size m, m >= n, in O(1) time. While looking up a key, at most two probes into the hash table are made. Our main motivation is to minimize the memory requirement for representing the hashing scheme, while retaining a high probability of finding quasi-perfect hash functions for arbitrary sets S. Compared with Fredman, Komlos and Szemeredi's two-level hashing for a bounded universe U, quasi-perfect hashing proves superior from the standpoint of both space and speed [C98a].
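To illustrate the at-most-two-probes property (not the quasi-perfect construction itself, whose details are given in [C98a]), the sketch below uses a different, cuckoo-style insertion scheme in which every key ends up at one of two hash positions, so a lookup never makes more than two probes.

    import random

    class TwoProbeTable:
        # illustrative two-probe scheme; not the quasi-perfect hashing of the paper
        def __init__(self, m):
            self.m = m
            self.p = 2_147_483_647
            self.table = [None] * m
            self.coeff = [(random.randrange(1, self.p), random.randrange(self.p)) for _ in range(2)]

        def _probe(self, key, i):
            a, b = self.coeff[i]
            return (a * key + b) % self.p % self.m

        def insert(self, key):
            for _ in range(32):                               # displacement limit
                for i in (0, 1):
                    slot = self._probe(key, i)
                    if self.table[slot] is None:
                        self.table[slot] = key
                        return
                slot = self._probe(key, random.choice((0, 1)))
                key, self.table[slot] = self.table[slot], key  # evict and retry with the victim
            raise RuntimeError("table too dense; a real scheme would rehash")

        def lookup(self, key):
            # at most two probes, whatever the table contents
            return any(self.table[self._probe(key, i)] == key for i in (0, 1))

    t = TwoProbeTable(m=128)
    keys = random.sample(range(10**6), 40)
    for k in keys:
        t.insert(k)
    print(all(t.lookup(k) for k in keys))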
Minimal perfect hashing. Let S be a set of n distinct keys belonging to some universe U. We would like to store the keys of S so that membership queries of the form "Is x in S?" can be answered quickly. This searching problem, also called the dictionary problem, is ubiquitous in computer science applications. Various algorithms to solve the dictionary problem have been devised. One of them is to compute a function h(x) which determines the location of a key in a table. This approach has led to a class of very efficient searching methods commonly known as hashing or scatter storage techniques. If the set of keys is static, then it is possible to compute a function h(x) which enables us to find any key in the table in just one probe. Such a function is said to be a perfect hash function. A perfect hash function which allows us to store a set of records in a minimum amount of memory, i.e. in a table whose size equals the number of keys times the key size, is called a minimal perfect hash function. Minimal perfect hash functions have a wide range of applications. They are used for memory-efficient storage and fast retrieval of items from static sets, such as reserved words in programming languages, command names in interactive systems, or commonly used words in natural languages. Minimal perfect hash functions therefore find application in compilers, operating systems, language translation systems, hypertext, hypermedia, file managers, and information retrieval systems. In the project we study various kinds of (mostly probabilistic) algorithms for generating minimal perfect hash functions for large sets (up to 100000 keys) [CHM96,MWHC96] [C95] [CM93,CM93a,MWHC93] [CHM92,CHM92a,CHM92b,CM92,MWCH92] [CHM91].
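One probabilistic construction from this line of work, the acyclic random graph method of [CHM92], can be sketched as follows: the keys become edges of a random graph on about c*n vertices, and if the graph turns out to be acyclic, values g() are assigned to the vertices so that h(key) = (g(u) + g(v)) mod n is a distinct number in 0..n-1. The code below is a simplified illustration (the constant c, the hash family and the retry limit are assumptions), not the project's implementation.

    import random

    def build_mphf(keys, c=2.5, attempts=100):
        n = len(keys)
        m = int(c * n) + 1                         # number of graph vertices
        p = 2_147_483_647
        for _ in range(attempts):                  # retry until the graph is acyclic
            coeff = [(random.randrange(1, p), random.randrange(p)) for _ in range(2)]
            f = lambda k, i: (coeff[i][0] * hash(k) + coeff[i][1]) % p % m
            adj = [[] for _ in range(m)]
            for idx, k in enumerate(keys):         # each key becomes an edge
                u, v = f(k, 0), f(k, 1)
                if u == v:                         # self-loop: try new functions
                    break
                adj[u].append((v, idx))
                adj[v].append((u, idx))
            else:
                g = [None] * m
                ok = True
                for root in range(m):              # assign g over each component
                    if g[root] is not None or not adj[root]:
                        continue
                    g[root] = 0
                    stack = [root]
                    while stack and ok:
                        u = stack.pop()
                        for v, idx in adj[u]:
                            if g[v] is None:
                                g[v] = (idx - g[u]) % n
                                stack.append(v)
                            elif (g[u] + g[v]) % n != idx:
                                ok = False         # a cycle broke the assignment
                                break
                    if not ok:
                        break
                if ok:
                    return lambda k: (g[f(k, 0)] + g[f(k, 1)]) % n
        raise RuntimeError("no acyclic graph found; increase c")

    keys = ["if", "else", "while", "for", "return", "break", "continue"]
    h = build_mphf(keys)
    print(sorted(h(k) for k in keys))              # prints a permutation of 0..n-1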