


ELEC 301 Projects Fall 2005

Collection edited by: Richard Baraniuk and Rice University ELEC 301

Content authors: Danny Blanco, Elliot Ng, Charlie Ice, Bryan Grandy, Sara Joiner, Austin Bratton, Ray Hwong, Jeanne Guillory, Richard Hall, Jared Flatow, Siddharth Gupta, Veena Padmanabhan, Grant Lee, Heather Johnston, Deborah Miller, Warren Scott, _ _, Chris Lamontagne, Bryce Luna, David Newell, William Howison, Patrick Kruse, Kyle Ringgenberg, Michael Lawrence, Yi-Chieh Wu, Scott Novich, Andrea Trevino, and Phil Repicky

Online: <http://cnx.org/content/col10380/1.3>

This selection and arrangement of content as a collection is copyrighted by Richard Baraniuk and Rice University ELEC 301.

It is licensed under the Creative Commons Attribution License: http://creativecommons.org/licenses/by/2.0/

Collection structure revised: 2007/09/25

For copyright and attribution information for the modules contained in this collection, see the "Attributions" section at the end of the collection.

ELEC 301 Projects Fall 2005

Table of Contents

Chapter 1. Steganography - What's In Your Picture

1.1. Abstract and History

Abstract and History

Abstract

A Brief History of Steganography

1.2. Compression Framework

Compression

Compression Framework

1.3. Compression - Dropping the DCT Coefficients

Compression Algorithm

Dropping DCT Coefficients

1.4. Compression - Zeros Grouping

Compression Algorithm

Zeros Grouping

1.5. Zeros Hiding Method

Data Hiding Methods

Zero Hiding

Hiding Information

Data Retrieval

1.6. Bit-O-Steg Method - Background

Data Hiding Methods

Bit-O-Steg

Previous Work and Background

1.7. Bit-O-Steg Hiding

Data Hiding Methods

Bit-O-Steg

Hiding Information

Retrieving the Data

1.8. Importance of Steganalysis

Steganalysis

Importance of Steganalysis

1.9. Steganalysis - Zeros Hiding Detection

Steganalysis

Zeros Hiding Detection

1.10. Steganalysis - Bit-O-Steg Detection

Steganalysis

Bit-O-Steg Detection

1.11. Future Considerations and Conclusions

Future Considerations and Conclusion

Future Work

Conclusion

1.12. Works Cited

Works Cited

1.13. Steganography Matlab Code

1.14. Group Members

Group Bio

Chapter 2. Investigation of Delay and Sum Beamforming Using a Two-Dimensional Array

2.1. Delay and Sum Beamforming with a 2D Array: Introduction

1. Introduction

2. Delay and Sum Beamforming

2.1 Nearfield Processing

2.2 Farfield Processing

3. Complications

3.1 Time Quantization

3.2 Aliasing, Resolution, and Sampling Frequency

3.3 Unknown Source Location

2.2. Hardware Setup for Perimeter Array of Microphones

Hardware Setup for Perimeter Array of Microphones

Choosing Microphones

Data Acquisition

PreAmps

Putting it all Together

2.3. Labview Implementation of 2D Array Delay and Sum Beamformer

Introduction

Waveform Generation VI

Upsampling VI

Delay Generation VI

Main Analysis VI

Labview Code

2.4. Results of the Testing of 2D Array Beamformer

2.5. Delay and Sum Beamforming with a 2D Array: Conclusions

Summary of Results of Data

Limitations of Hardware and Computing Power

Possible Extensions

2.6. Expressing Appreciation for the Assistance of Others

Chapter 3. Seeing Using Sounds

3.1. Introduction and Background for Seeing with Sound

Introduction

Background and Problems

3.2. Seeing using Sound - Design Overview

Input Filtering

The Mapping Process

3.3. Canny Edge Detection

Introduction to Edge Detection

Canny Edge Detection and Seeing Using Sound

3.4. Seeing using Sound's Mapping Algorithm

Vertical Mapping

Horizontal Mapping

Color Mapping

3.5. Demonstrations of Seeing using Sound

Examples

3.6. Final Remarks on Seeing using Sound

Future Considerations and Conclusions

Contact Information of Group Members

Chapter 4. Intelligent Motion Detection Using Compressed Sensing

4.1. Intelligent Motion Detection and Compressed Sensing

New Camera Technology with New Challenges

4.2. Compressed Sensing

4.3. Feature Extraction from CS Data

Can Random Noise Yield Specific Information?

Simplicity for Low Power

Investigation Goals

4.4. Methodology for Extracting Information from "Random" Measurements

Simulating Compressed Sensing

Random, On Average

Resolution Limit

4.5. Idealized Data for Motion Detection

Making Frames

Making Movies

4.6. Speed Calculation: the Details

Extensibility

Average Absolute Change to Measure Speed

Average Squared Change to Measure Speed

4.7. Ability to Detect Speed: Results

Calculations Performed on Each Movie Clip

Velocity Trends

4.8. Concluding Remarks for CS Motion Detection

4.9. Future Work in CS Motion Detection

4.10. Support Vector Machines

4.11. The Team and Acknowledgements

The Team

Acknowledgements

Chapter 5. Terahertz Ray Reflection Computerized Tomography

5.1. Introduction-Experimental Setup

T-rays: appropriateness for imaging applications

Experimental setup that provided the data used in this project

Two main steps for imaging the test object: I) Deconvolution, II) Reconstruction

5.2. Description/Manipulation of Data

5.3. Deconvolution with Inverse and Wiener Filters

Problem Statement

Inverse Filter

Wiener Filter

5.4. Results of Deconvolution

5.5. Reconstruction

Theory of Filtered Backprojection Algorithm (FBP)

5.6. Backprojection Implementation

ShrinkPR

Filtersinc

Backproject

Representative Results

5.7. Conclusions and References

Conclusions

Future Work

References

5.8. Team Incredible

Chapter 6. Filtering and Analysis of Heart Rhythms

6.1. Introduction to Electrocardiogram Signals

Abstract

6.2. Medical Background

6.3. Block Diagram/Method

6.4. Sample Outputs

6.5. Overall Results and Conclusions

6.6. MATLAB Analysis Code

6.7. Group Members

Chapter 7. Naive Room Response Deconvolution

7.1. Introduction to Naive Acoustic Deconvolution

7.2. Naive Deconvolution Theory

7.3. Recording the Impulse Response of a Room

7.4. The Effectiveness of Naive Audio Deconvolution in a Room

7.5. Problems and Future Considerations in Naive Room Response Deconvolution

7.6. Authors' Contact Information

7.7. Room Response Deconvolution M-Files

Chapter 8. Musical Instrument Recognition

8.1. Introduction

Introduction

8.2. Simple Music Theory as it relates to Signal Processing

Simple Music Theory

Harmonics

Duration and Volume

8.3. Common Music Terms

8.4. Matched Filter Based Detection

Shortcomings of the Matched Filter

8.5. System Overview

8.6. Pitch Detection

Pitch Detection

8.7. Sinusoidal Harmonic Modeling

Sinusoidal Harmonic Modeling

8.8. Audio Features

Definitions

How We Chose Features

References

8.9. Problems in Polyphonic Detection

8.10. Experimental Data and Results

Experimental Data

Training

Testing

Results

Self-Validation

Monophonic Recordings

Polyphonic Recordings

8.11. Gaussian Mixture Model

Gaussian Mixture Model

Recognizing Spectral Patterns

References

8.12. Future Work in Musical Recognition

Improving the Gaussian Mixture Model

Improving training data

Increasing the scope

Improving Pitch Detection

8.13. Acknowledgements and Inquiries

8.14. Patrick Kruse

Patrick Alan Kruse

8.15. Kyle Ringgenberg

Kyle Martin Ringgenberg

8.16. Yi-Chieh Jessica Wu

Chapter 9. Accent Classification using Neural Networks

9.1. Introduction to Accent Classification with Neural Networks

Overview

Goals

Design Choices

Applications

9.2. Formants and Phonetics

Sample Spectrograms

9.3. Collection of Samples

Choosing the sample set

9.4. Extracting Formants from Vowel Samples

9.5. Neural Network Design

9.6. Neural Network-based Accent Classification Results

Results

Test 1: Chinese Subject

Test 2: Iranian Subject

Test 3: Chinese Subject

Test 4: Chinese Subject

Test 5: American Subject (Hybrid of Regions)

Test 6: Russian Subject

Test 7: Russian Subject

Test 8: Cantonese Subject

Test 9: Korean Subject

9.7. Conclusions and References

Conclusions

Acknowledgements

References

Index

Chapter 1. Steganography - What's In Your Picture

1.1. Abstract and History*

Abstract and History

Abstract

For years, people have devised techniques for encrypting data while others have attempted to break the resulting codes. For our project we decided to put our wealth of DSP knowledge to use in the art of steganography. Steganography is a technique that allows one to hide binary data within an image while adding few noticeable changes. Technological advancements over the past decade or so have brought terms like “mp3,” “jpeg,” and “mpeg” into our everyday vocabulary, and these lossy compression techniques lend themselves perfectly to hiding data. We chose this project because it gave us the chance to study several aspects of DSP. First, we devised our own compression technique, loosely based on JPEG. Many steganographic techniques have already been created, which compelled us to design two of our own strategies for hiding data in the images we compress. Our first method, zero hiding, adds the binary data into the DCT coefficients dropped during compression. Our other method, which we call bit-o-steg, uses a key to change the values of the coefficients that remain after compression. Finally, we had to find ways to analyze the success of our data hiding strategies, and through our research we found both DSP and statistical methods to qualitatively measure our work.

A Brief History of Steganography

Steganography, or “hidden writing,” can be traced back to 440 BC in ancient Greece. The Greeks would often write a message on a wooden panel, cover it in wax, and then write an innocuous message on the wax. Since wax tablets were already in everyday use as writing surfaces, hiding a message inside such a common device drew very little suspicion. In addition to its use by the Greeks, steganography was employed by spies in World War II, and there were even rumors that terrorists made use of steganography early in 2001 to plan the attacks of September 11.

1.2. Compression Framework*


Compression

Compression Framework

There are many file formats for saving images, but much of the research in steganography is done using the JPEG format. JPEG is very common and uses a relatively straightforward compression algorithm. Although several JPEG compression scripts have been written for MATLAB, customizing them for our purposes and getting the output to work with the JPEG format would have shifted the focus of our project from steganography to implementing JPEG compression. We therefore decided to implement our own custom image framework, similar to JPEG but much more straightforward.

1.3. Compression - Dropping the DCT Coefficients*

Compression Algorithm

Dropping DCT Coefficients

Our framework and JPEG are both based around the discrete cosine transform. Just as with sound, certain frequencies in an image are more noticeable than others, so removing the less noticeable ones changes the image very little. We used the 2D discrete cosine transform (DCT), shown in equation (1.1), to convert an image into the frequencies that make it up; in other words, it takes us into the frequency domain.

X(u,v) = \alpha(u)\,\alpha(v) \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} x(m,n) \cos\frac{\pi(2m+1)u}{2M} \cos\frac{\pi(2n+1)v}{2N}   (1.1)

where x(m,n) is the M-by-N image, X(u,v) its DCT, \alpha(0) = \sqrt{1/M} and \alpha(u) = \sqrt{2/M} for u > 0 (and similarly for v with N).

There are several transforms that could have been used to move the image into the frequency domain. The DCT, however, is a purely real transform, so manipulating the frequencies is much more straightforward than with other transforms. From here we could take the DCT of the entire image and then throw away the less noticeable frequencies; unfortunately, this would blur the image and destroy its edges. To preserve the integrity of the image, it is instead divided into 8x8 blocks, each transformed separately. To drop insignificant frequencies, JPEG compression uses a quantization matrix; we simplified this process by choosing a threshold value and dropping frequencies below that threshold. Our compression algorithm thus models the basic functionality of the JPEG standard.

Figure 1.1.

The result of taking the DCT. The numbers in red are the coefficients that fall below the specified threshold of 10.
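One block of this scheme could be sketched as follows. This is an illustrative Python sketch, not the project's actual code (which is in MATLAB); scipy's dctn/idctn stand in for the 2D DCT, and the threshold of 10 matches the figure.

```python
import numpy as np
from scipy.fft import dctn, idctn

def compress_block(block, threshold=10):
    """Take the 2D DCT of one 8x8 block and zero out every
    coefficient whose magnitude falls below the threshold."""
    coeffs = dctn(block.astype(float), norm='ortho')
    coeffs[np.abs(coeffs) < threshold] = 0
    return coeffs

def decompress_block(coeffs):
    """Invert the DCT to recover an approximation of the block."""
    return idctn(coeffs, norm='ortho')
```

Because only small, visually insignificant coefficients are zeroed, the reconstructed block stays close to the original.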

1.4. Compression - Zeros Grouping*

Compression Algorithm

Zeros Grouping

The second part of our image framework is zeros grouping. Just like the JPEG standard, the algorithm traverses each DCT matrix in a zig-zag pattern, creating a length-64 vector for each matrix. The advantage of the zig-zag pattern is that it orders the resulting vector from low frequencies to high frequencies. Each group of consecutive zeros is then replaced with an ASCII character recording how many zeros that group contains.

Figure 1.2.

Zig-zag method traverses the matrix and vectorizes the matrix. After grouping zeros the resulting bitstream is sent to a file.
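A minimal Python sketch of the zig-zag scan and the zero grouping; the ('Z', count) marker below stands in for the ASCII count character the framework actually emits.

```python
def zigzag(mat):
    """Traverse a square matrix in the JPEG zig-zag order,
    returning a flat list ordered from low to high frequencies."""
    n = len(mat)
    out = []
    for d in range(2 * n - 1):                      # walk the anti-diagonals
        idx = [(i, d - i) for i in range(n) if 0 <= d - i < n]
        if d % 2 == 0:
            idx.reverse()                           # alternate direction each diagonal
        out.extend(mat[i][j] for i, j in idx)
    return out

def group_zeros(vec):
    """Replace each run of zeros with a ('Z', run_length) marker."""
    out, run = [], 0
    for v in vec:
        if v == 0:
            run += 1
        else:
            if run:
                out.append(('Z', run))
                run = 0
            out.append(v)
    if run:
        out.append(('Z', run))
    return out
```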

With this simple framework in place, we are able to model a real-world image compression algorithm and focus on implementing steganography.

1.5. Zeros Hiding Method*

Data Hiding Methods

Zero Hiding

Hiding Information

We arrived at our first data hiding method, which we call “zero hiding,” quite intuitively. Recall that our compression algorithm removes the least important DCT coefficients. It follows, then, that we could put the bit stream we wish to hide into these dropped coefficients without changing the image drastically. For this to work, however, there must be a way to distinguish a zero that resulted from a dropped coefficient from a coefficient that was zero to begin with. We therefore ran the image through a modified compressor that, instead of dropping coefficients below the specified threshold, replaced them with a plus or minus one, depending on the sign of the coefficient.

Figure 1.3.

The DCT is taken and then each coefficient under the specified threshold (10) is dropped. These coefficients are shown in blue in the picture on the right.

Next, the hiding algorithm is given a binary data stream and the threshold value. The data stream is divided into words whose maximum decimal value must be less than the threshold, since values at or above the threshold signify important coefficients in the picture. We then increment each word’s decimal value by one, so that no hidden word is zero valued; a zero-valued word would otherwise be indistinguishable from a coefficient that was zero in the original image. Finally, we go back to the coefficient matrix and replace the ±1 markers with the new data words, maintaining the sign throughout.

Figure 1.4.

The dropped coefficients are replaced with words created from the data stream. The IDCT is then taken, transforming the coefficient matrix back to a picture matrix.

Data Retrieval

To recover the hidden data, the recovery script is given the threshold; it subtracts one from every DCT coefficient below that threshold and strings the resulting binary values together, reconstructing the original binary data.
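The hide and retrieve steps could be sketched together as follows. This is a Python illustration, not the authors' MATLAB code: the 3-bit word length and the row-major slot ordering are our own assumptions (any word size whose value plus one stays under the threshold would work).

```python
import numpy as np

def hide(coeffs, bits, threshold=10, wordlen=3):
    """coeffs: a DCT block in which dropped coefficients were replaced
    by +/-1 markers. Packs `bits` into those slots, wordlen bits at a
    time, adding 1 so no hidden word is ever zero valued."""
    assert 2 ** wordlen < threshold            # words must stay below the threshold
    out = coeffs.astype(float).copy()
    words = [int(bits[i:i + wordlen], 2) + 1
             for i in range(0, len(bits), wordlen)]
    slots = list(zip(*np.where(np.abs(out) == 1)))   # +/-1 markers, row-major
    for (r, c), w in zip(slots, words):
        out[r, c] = np.sign(out[r, c]) * w           # keep the original sign
    return out

def recover(coeffs, nbits, threshold=10, wordlen=3):
    """Read back every coefficient below the threshold, subtract the 1
    added during hiding, and concatenate the binary words."""
    vals = coeffs[(np.abs(coeffs) >= 1) & (np.abs(coeffs) < threshold)]
    bits = ''.join(format(int(abs(v)) - 1, '0{}b'.format(wordlen)) for v in vals)
    return bits[:nbits]
```

Unused ±1 slots simply decode to zero words, which the final truncation to the known message length discards.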

1.6. Bit-O-Steg Method - Background*

Data Hiding Methods

Bit-O-Steg