<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns="http://purl.org/rss/1.0/" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel rdf:about="https://ir.vidyasagar.ac.in/jspui/handle/123456789/397">
    <title>DSpace Community:</title>
    <link>https://ir.vidyasagar.ac.in/jspui/handle/123456789/397</link>
    <description />
    <items>
      <rdf:Seq>
        <rdf:li rdf:resource="https://ir.vidyasagar.ac.in/jspui/handle/123456789/6108" />
        <rdf:li rdf:resource="https://ir.vidyasagar.ac.in/jspui/handle/123456789/6106" />
        <rdf:li rdf:resource="https://ir.vidyasagar.ac.in/jspui/handle/123456789/5745" />
        <rdf:li rdf:resource="https://ir.vidyasagar.ac.in/jspui/handle/123456789/5583" />
      </rdf:Seq>
    </items>
    <dc:date>2026-04-27T20:56:19Z</dc:date>
  </channel>
  <item rdf:about="https://ir.vidyasagar.ac.in/jspui/handle/123456789/6108">
    <title>Prioritization of Multi-Sensor Tracked Data</title>
    <link>https://ir.vidyasagar.ac.in/jspui/handle/123456789/6108</link>
    <description>Title: Prioritization of Multi-Sensor Tracked Data
Authors: Kaity, Sourav
Abstract: In real scenarios, accurate tracking of moving objects is essential&#xD;
for surveillance, performance analysis of airborne vehicles, detection&#xD;
of inbound threats, engagement of anti-threat equipment, detecting the origin&#xD;
of the enemy threat launch point etc. Tracking Radar system, Electro-Optical&#xD;
tracking systems and passive target tracking systems are well known for moving&#xD;
object tracking systems. All these tracking systems are widely used throughout&#xD;
the globe. The measurement accuracy of the moving object location and the reliability of&#xD;
the measured location are based on two critical factors. To achieve a more reliable&#xD;
result, multiple sensors are normally used instead of one. If all measurements are&#xD;
in agreement with each other, then reliability increases. Sometimes it may happen&#xD;
that one or more sensors capture erroneous measurements. In such a scenario, if the erroneous&#xD;
sensors are identified and eliminated, then the reliability of measurement can be significantly&#xD;
improved. Each sensor has its own measurement accuracy level. To increase&#xD;
the measurement accuracy, it is necessary to identify the error-contributing factors&#xD;
of all sensors and their impacts. An efficient data fusion algorithm can be&#xD;
applied to obtain accurate measurements. The time efficiency of the algorithm is also a&#xD;
prime concern, as all the system measurements are used for real-time applications.&#xD;
We have focused our research on three different kinds of tracking systems,&#xD;
namely the Electro-Optical Tracking System (EOTS), Tracking Radar System and Passive Target Tracking System. The working principles&#xD;
of all these sensors are different. We have worked with all of these sensors and&#xD;
tried to find out the best possible accuracy model for each. These models have&#xD;
significantly improved the accuracy and at the same time helped in calculating the&#xD;
error boundary. Another important contribution is a real-time remote visualization&#xD;
system for knowing the real-time location updates of the moving object.&#xD;
In this research work, multiple EOTS work together to produce the object&#xD;
location. Each combination of two EOTS measurements can be used to compute the&#xD;
object position. More than two sensors can produce a more reliable&#xD;
result. But if any one of the sensors is erroneous, then the whole system&#xD;
becomes unreliable. Three different models viz “Prioritization and Elimination&#xD;
of erroneous sensor using perpendicular distance method”, “Improvement in the&#xD;
Accuracy of the Moving Object Position by Eliminating Erroneous Sensors using&#xD;
Clustering Approach”, “Multi Sensor Data Fusion Technique for Target Tracking&#xD;
Based on the Combination of Triangulation Method and K-means Algorithm” are&#xD;
established for identifying one or more erroneous sensors. All these models&#xD;
are proven to successfully eliminate erroneous sensor(s) and produce accurate&#xD;
object locations.&#xD;
In the multiple-radar scenario, all radars measure the object location as&#xD;
per their accuracy. Our research work focused on the factors and their impacts on&#xD;
Tracking Radar measurement accuracy to develop the model viz. Analysis of&#xD;
Factors and their Impacts in Measurement Accuracy and Prioritisation of Radars.&#xD;
The model quantifies the measurement accuracy. To improve the accuracy, we have&#xD;
established a model viz. Multiple Radar Data Fusion to Improve the Accuracy in&#xD;
Position Measurement Based on Clustering Algorithm. First, it identifies the presence&#xD;
of any erroneous measurements. After elimination, an efficient data fusion technique is applied to produce accurate position measurements.&#xD;
A passive target tracking system is a combination of at least four&#xD;
time-synchronised receivers. Here we have established a “time difference of&#xD;
arrival” algorithm to find out the object location based on the time difference of&#xD;
arrival of the electro-magnetic signals from the target. Accuracy of the object position&#xD;
measurement depends upon the geographical location and time sync accuracy&#xD;
of the receivers. A model “Prioritization of Receivers for Minimum Possible&#xD;
Error Boundary in Time Difference of Arrival Algorithm” is established to find&#xD;
out the best possible combination of four receivers in case there are more than&#xD;
four receivers. This model also finds the error boundary of the measurement and&#xD;
the correlation between the error factor and the range of the moving object.&#xD;
In the research work, different techniques are adopted and improved for finding&#xD;
the erroneous sensors based on the unique error-contributing factors of all three&#xD;
kinds of sensors. The Electro-Optical Tracking System, Tracking Radar System&#xD;
and Passive Target Tracking System are prioritized based on multiple critical criteria&#xD;
so that the best sensor can be used for data fusion and the most accurate result&#xD;
can be achieved. Different time-efficient clustering algorithms are defined based on&#xD;
the tracking principle of each kind of tracking system. In each case the clustering&#xD;
algorithm is efficiently implemented for eliminating erroneous sensors as well as&#xD;
grouping the best sensors for improvement in measurement. A number of experiments&#xD;
were carried out in this research work for all three kinds of tracking&#xD;
sensors to establish all the algorithms. The results obtained in all the experiments&#xD;
were satisfactory. Real-time remote visualization of the measured parameters is&#xD;
also an important task for monitoring and the same is analysed and discussed in&#xD;
the thesis in detail. Overall, the performance of all these techniques and systems was&#xD;
tested rigorously with simulation to produce reliable, accurate results in real-time.</description>
    <dc:date>2021-07-26T00:00:00Z</dc:date>
  </item>
  <item rdf:about="https://ir.vidyasagar.ac.in/jspui/handle/123456789/6106">
    <title>Design of an effective Congestion Control Routing Protocol for Mobile-Ad-Hoc Network</title>
    <link>https://ir.vidyasagar.ac.in/jspui/handle/123456789/6106</link>
    <description>Title: Design of an effective Congestion Control Routing Protocol for Mobile-Ad-Hoc Network
Authors: Singha, Soamdeep
Abstract: A Mobile Ad hoc Network (MANET) is an infrastructure-less, self-configuring network&#xD;
in which the nodes themselves create and manage the network in a self-organized manner.&#xD;
Mobile Ad hoc networks play an important role in the deployment of future wireless&#xD;
communication systems. MANET in today's world finds its use in disaster management,&#xD;
military applications and other emergency operations. MANETs have faced greater&#xD;
performance requirements in recent years due to the increased use of streaming multimedia&#xD;
applications. To meet these requirements, the existing routing protocols should&#xD;
provide data transfer with minimal delay, packet loss and jitter in a bandwidth restricted&#xD;
environment. A MANET inherently depends on the routing scheme employed&#xD;
to provide expected Quality of Service (QoS). Many congestion control routing protocols&#xD;
have been developed in the past to address these issues such as Dynamic Source&#xD;
Routing (DSR), Ad-hoc on-Demand Distance Vector (AODV), Zone Routing Protocol&#xD;
(ZRP) and Temporally Ordered Routing Algorithm (TORA). However, the capability&#xD;
of these traditional protocols to support streaming multimedia applications is limited. In&#xD;
the present investigation, we have proposed different approaches to Random Early Detection&#xD;
(RED) through queue management to design an effective congestion control&#xD;
routing protocol for MANETs. RED is a powerful mechanism for controlling traffic. It&#xD;
can provide better network utilization than Drop-Tail if properly used, but can induce&#xD;
network instability and major traffic disruption if not properly configured. RED configuration&#xD;
has been a problem since its first proposal, and many have tried to address&#xD;
this topic. Unfortunately, most of the studies propose RED configurations (optimal&#xD;
sets of RED parameters) based on heuristics and simulations, and not on a systematic&#xD;
approach. Their common problem is that each proposed configuration is only good&#xD;
for the particular traffic conditions studied, but may have detrimental effects if used&#xD;
in other conditions. In this study, we propose a general method for configuring RED&#xD;
congestion control modules, based on a model of active queue management (AQM).&#xD;
In this dissertation, six new congestion control models, Model-1: Application of Dynamic Weight with Distance to Improve the Performance of RED (ADWD-RED-IP),&#xD;
Model-2: Active Queue Management in RED to Reduce Packet Loss (AQMRED-RPL),&#xD;
Model-3: A Predictable Active Queue Management to Reduce Sensitivity&#xD;
of RED Parameter (PAQM-RS-RED), Model-4: An Innovative Active Queue Management&#xD;
Model Through Threshold Adjustment Using Queue Size (IAQM-TA-QZ),&#xD;
Model-5: A Novel Congestion Control Algorithm Using Buffer Occupancy RED (CCA-BO-RED),&#xD;
Model-6: Active Queue Management in RED considering Critical Point on&#xD;
Target Queue (AQM-RED-CPTQ), have been introduced to improve the performance&#xD;
of RED. Model-1 (ADWD-RED-IP) is proposed, where the dynamic weight parameter&#xD;
Dq is presented with a probability P to increase the RED efficiency. Next,&#xD;
Model-2 (AQM-RED-RPL) is designed, where fewer packet drops are achieved by making&#xD;
many refinements and monitoring both the average queue size and the instantaneous&#xD;
queue size in the packet dropping function. After that, Model-3 (PAQM-RS-RED)&#xD;
has been suggested, which can be incorporated directly into&#xD;
RED routers; it eliminates the sensitivity to variables that influence the functioning of&#xD;
RED and can reliably reach a clearly defined target average&#xD;
queue length in a broad range of traffic situations. Then, Model-4 (IAQM-TA-QZ) provides an algorithm&#xD;
that adapts the threshold parameters and the probability of packet drop as per the load&#xD;
condition of the traffic. The next model, Model-5 (CCA-BO-RED), measures the rate of&#xD;
occupancy of the queue and treats it as a congestion parameter to predict&#xD;
when the queue is crowded. This method is used to modify RED variables dynamically.&#xD;
Finally, we have proposed Model-6 (AQM-RED-CPTQ). In order to provide greater&#xD;
congestion management over the network while also preserving the value of RED, it&#xD;
works to enhance these criteria. This model introduces a Critical Point on the Target&#xD;
Queue and some traits of RED and its variations.&#xD;
This research analyzes the performance of the proposed congestion control Ad hoc&#xD;
routing protocols such as Random Early Detection (RED) and variations of RED using Network Simulator Version 2 (NS-2). The simulation is carried out with 100 nodes.&#xD;
Network traffic scenarios, one with 10 connections and another with 20 connections, are&#xD;
considered. The simulation areas are 400 x 400 and 600 x 1000 meters, and the fixed mobility&#xD;
speeds are 10 m/s and 20 m/s. The performance of the above routing protocols&#xD;
was analyzed in the Random Waypoint, Random Walk and Random Direction Mobility&#xD;
Models. The packet delivery ratio and the end-to-end delay for a varying number of&#xD;
sources have been evaluated with respect to parameters such as node speed, network&#xD;
traffic and node density. The comparative study pointed out the relative strengths&#xD;
and weaknesses of those congestion control Ad hoc routing protocols.&#xD;
In the present research, various methodologies have been introduced to improve the&#xD;
existing routing schemes for congestion control with the help of active queue management.&#xD;
We have compared our proposed schemes with some of the popular existing&#xD;
schemes such as RED, ERED, SRED, REM, BLUE, LDC, and FREED. It has been observed&#xD;
that the end-to-end delay, packet delivery ratio, and packet drop count&#xD;
of the proposed schemes are better than those of the existing schemes.</description>
    <dc:date>2021-07-26T00:00:00Z</dc:date>
  </item>
  <item rdf:about="https://ir.vidyasagar.ac.in/jspui/handle/123456789/5745">
    <title>Design and Analysis of Image Steganographic Protocol</title>
    <link>https://ir.vidyasagar.ac.in/jspui/handle/123456789/5745</link>
    <description>Title: Design and Analysis of Image Steganographic Protocol
Authors: Chowdhuri, Partha
Abstract: In today's Internet era, secure data communication is vital and indispensable. Image&#xD;
steganography is one of the most popular and widely used techniques to protect valuable&#xD;
information from illegitimate access. The quality of the stego image obtained from&#xD;
any steganographic scheme is inversely proportional to its data hiding capacity. This&#xD;
poses a challenge for the prospective researcher to strike a good trade-off among the&#xD;
quality of the stego image, embedding capacity and robustness. Moreover, it is not only the&#xD;
extraction of the secret message from the stego image but also the reconstruction of the original&#xD;
image from the stego image that is of paramount importance for many human-centric applications&#xD;
such as tactical communication, health care, e-governance, commercial security, and&#xD;
intellectual property rights. In the last two decades, researchers around the globe&#xD;
have tried to resolve these problems to some extent but have not achieved a significant&#xD;
level of success. In order to overcome these issues, some new image steganographic&#xD;
schemes have been designed in spatial domain. These schemes maintain a good balance&#xD;
between stego image quality, embedding capacity and robustness.&#xD;
Two single image based steganographic schemes have been designed and implemented&#xD;
using graph neighbourhood and pixel value difference. These schemes produce good&#xD;
quality stego images along with high embedding capacity. To increase the embedding&#xD;
capacity and robustness, and to achieve reversibility, some dual image based steganographic&#xD;
schemes have been designed using graph neighbourhood and weighted matrix. In these&#xD;
schemes, the use of dual image and image interpolation techniques help to increase the&#xD;
data hiding capacity, improve visual quality and enhance the security.&#xD;
To strengthen the robustness in a compressed environment, some novel steganographic&#xD;
schemes have been developed in the transform domain using Discrete Cosine Transform&#xD;
and Discrete Wavelet Transform. To counter the extent of distortion to the&#xD;
coefficients of the transform domain, a weighted matrix is introduced to maintain a good&#xD;
trade-off between quality and robustness. Further, some standard steganalysis techniques have been used to examine the proposed methods, which have also been tested under some steganographic attacks to analyze the robustness&#xD;
of the schemes, because designing a new scheme is not enough; rather, the analysis of&#xD;
its impact in terms of security and robustness is very important, as that would&#xD;
determine whether it can be advocated globally or not.</description>
    <dc:date>2021-02-08T00:00:00Z</dc:date>
  </item>
  <item rdf:about="https://ir.vidyasagar.ac.in/jspui/handle/123456789/5583">
    <title>Techniques for DNA sequences compression and encryption</title>
    <link>https://ir.vidyasagar.ac.in/jspui/handle/123456789/5583</link>
    <description>Title: Techniques for DNA sequences compression and encryption
Authors: Hossein, Syed Mahamud
Abstract: The purpose of this research is to achieve lossless compression and encryption within milliseconds of completion. Some notable research challenges concern the storing, transferring and safety of deoxyribonucleic acid (DNA) sequences. Although pattern matching for text compression has been studied for years and many publications are available in the literature, there is still room to enhance the effectiveness in terms of both compression &amp; encryption. Human beings are always fond of acquiring more and more information in the least possible time and space. Nowadays, the sending of DNA/RNA/protein sequences, especially over wireless networks, is very common.&#xD;
The DNA database size increases greatly, growing from millions to billions annually. Therefore, for storing and searching the DNA database, a systematic lossless compression and encryption algorithm is needed for safe transmission. In the field of bioinformatics, the storing and transmission of DNA is very important with respect to compression rate, ratio and encryption. DNA sequences need greater space for storage &amp; more time for encryption, causing much loss of time in the sending of information.&#xD;
Short DNA patterns recur with the highest frequency in biological sequences. The proposed compression algorithm is based on combinations of REPEAT, REVERSE, GENETIC PALINDROME &amp; PALINDROME. Another proposed compression &amp; selective encryption algorithm uses modified Huffman and RSA algorithms. This algorithm is based on searching for exact repetitions, substituting substrings with the corresponding ASCII codes and producing a library file, with data accumulating as an outcome. In this method the data is kept safe by using the ASCII code values for information interchange and by producing the library file, which acts as a signature. Huffman's algorithm is applied to the output of the first, repetition-based stage, and the Huffman tree levels &amp; node positions are also changed for encryption.&#xD;
This gives safety to the data: the exact coded values are essential for decoding, and only the particular coded values allotted at encoding time can be used. This method safeguards the sequence by applying ASCII symbols and is user friendly. This type of security is provided in tier one. In tier two, selective encryption techniques are used for higher-quality safety.&#xD;
From the information point of view, the most demanding question nowadays is the safety of information during transmission. The selective encryption process provides security, and this technique is applied to the compressed data, to the library file, or to both. Only a fraction of the message is encrypted in the selective encryption method, leaving the remaining part unchanged; this is the most important property of a selective encryption system. The proposed selective encryption reduces the computational cost of processing this data. Its safety is ensured by a signature that depends on the ASCII codes &amp; the progressive library file acting as a key. As an outcome, a systematized lossless compression technique and data structure are essential to store effectively, to access and communicate securely, and to search greatly sized data sets. These days, DNA/RNA sequences with a complex structure that stores facts of different types at the same time are in common use. The operating time is very low and depends on the input file size. The assessment of an encryption system depends on its speed and the level of safety it gives. The operating time of this algorithm is minimal; it needs little memory and can be easily used. The mass demand is for minimal storage space and low computational cost, so a systematized algorithm is needed for compression and encryption.&#xD;
Compression minimizes the file size, and encryption ensures the safety of a particular file which is to be sent over an insecure network like the Internet. In this age of information, the sharing and transferring of data have increased to a great extent. Generally, the information exchange is done over open channels, making it vulnerable to interception. On the other hand, effective information retrieval is needed to quickly discover the relevant information from this huge mass of facts using readily available materials.&#xD;
For that purpose, six stronger and more complete compression algorithms for shortening greatly sized collections of DNA sequences, along with two selective encryption schemes based on modified Huffman's &amp; RSA algorithms, are presented. When a user searches for any sequence of an organism, an encrypted, compressed DNA sequence can be sent from the source to the user. The encrypted, compressed DNA sequences can then be decrypted &amp; decompressed at the client end, producing a lower transmission time over the internet.&#xD;
The experimental results show that our compression-encryption algorithm is competitive with the best algorithms and is almost the fastest of all when the number of patterns is not very large. As an outcome, this algorithm is desirable for general string matching applications. These data structures and algorithms can be used in several situations, and experiments show that they can successfully compete with other techniques commonly used in those fields. This work, therefore, is greatly economical and has market potential.&#xD;
This algorithm is also tested on benchmark data and artificial sequences of equivalent length. By applying the modified Huffman technique, the compression rate &amp; ratio are lowered. Comparisons are also made between the compression technique and published results, and between the selective encryption and RSA algorithms.</description>
    <dc:date>2020-10-12T00:00:00Z</dc:date>
  </item>
</rdf:RDF>

