
Commit c23d238

Update README.md
1 parent dd6ee14 commit c23d238

1 file changed: +8 -8 lines changed


README.md

Lines changed: 8 additions & 8 deletions
```diff
@@ -44,22 +44,22 @@ We wanted to make it easy for 70 million deaf people across the world to be inde
 
 ## Setup
 
-* Use the command prompt to set up the environment using the requirements_cpu.txt and requirements_gpu.txt files.
+* Use the command prompt to set up the environment using the install_packages.txt and install_packages_gpu.txt files.
 
-`python -m pip install -r requirements_cpu.txt`
+`python -m pip install -r install_packages.txt`
 
 This will help you in installing all the libraries required for the project.
 
```
5353
## Process
5454

55-
* Run `set_hand_hist.py` to set the hand histogram for creating gestures.
55+
* Run `set_hand_histogram.py` to set the hand histogram for creating gestures.
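For readers following the Process steps, here is a minimal sketch of what a hand-histogram step like `set_hand_histogram.py` typically does, assuming the HSV-histogram-plus-backprojection approach common in OpenCV hand segmentation; the function names, patch coordinates, and the `hist.npy` path are illustrative, not the repo's actual code:

```python
import cv2
import numpy as np

def build_hand_histogram(roi_bgr):
    """Build a normalized HSV histogram from a patch of hand pixels."""
    hsv = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2HSV)
    # Histogram over hue and saturation only; value is dropped so the
    # result is less sensitive to lighting.
    hist = cv2.calcHist([hsv], [0, 1], None, [180, 256], [0, 180, 0, 256])
    return cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
if ok:
    # Sample a fixed square where the user is asked to place their hand.
    hand_patch = frame[100:200, 100:200]
    hist = build_hand_histogram(hand_patch)
    np.save("hist.npy", hist)  # illustrative path; later steps reload this
```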
```diff
 * Once you get a good histogram, save it in the code folder, or you can use the histogram created by us that can be found [here](https://github.com/harshbg/Sign-Language-Interpreter-using-Deep-Learning/blob/master/Code/hist).
 * Add gestures and label them using OpenCV, which uses the webcam feed, by running `create_gestures.py`; the captured images are stored in a database. Alternatively, you can use the gestures created by us [here](https://github.com/harshbg/Sign-Language-Interpreter-using-Deep-Learning/tree/master/Code).
```
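The capture step can be pictured as backprojecting that histogram onto each webcam frame and saving the thresholded hand region per label. A sketch under the same assumptions; the `gestures/<label>/` layout and all names here are hypothetical:

```python
import os
import cv2
import numpy as np

def segment_hand(frame_bgr, hist):
    """Backproject the stored hand histogram to isolate skin-colored pixels."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.calcBackProject([hsv], [0, 1], hist, [0, 180, 0, 256], 1)
    # Smooth and threshold the backprojection into a binary hand mask.
    mask = cv2.GaussianBlur(mask, (11, 11), 0)
    _, thresh = cv2.threshold(mask, 20, 255, cv2.THRESH_BINARY)
    return thresh

def save_gesture_images(label, count=100, out_dir="gestures"):
    """Capture `count` binary hand images from the webcam for one label."""
    hist = np.load("hist.npy")               # histogram from the previous step
    os.makedirs(os.path.join(out_dir, label), exist_ok=True)
    cap = cv2.VideoCapture(0)
    saved = 0
    while saved < count:
        ok, frame = cap.read()
        if not ok:
            break
        thresh = segment_hand(frame, hist)
        # Crop a fixed capture box and scale to the training input size.
        roi = cv2.resize(thresh[100:400, 100:400], (50, 50))
        cv2.imwrite(os.path.join(out_dir, label, f"{saved}.jpg"), roi)
        saved += 1
    cap.release()

save_gesture_images("A")   # e.g., capture images for the letter A
```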
```diff
-* Add different variations to the captured gestures by flipping all the images using `flip_images.py`.
+* Add different variations to the captured gestures by flipping all the images using `Rotate_images.py`.
```
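The flipping step is essentially one `cv2.flip` call per image; a sketch assuming the hypothetical `gestures/` layout above:

```python
import glob
import cv2

# Mirror every captured gesture image to double the dataset.
for path in glob.glob("gestures/*/*.jpg"):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    flipped = cv2.flip(img, 1)   # flipCode=1 flips around the vertical axis
    cv2.imwrite(path.replace(".jpg", "_flipped.jpg"), flipped)
```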
```diff
 * Run `load_images.py` to split all the captured gestures into training, validation, and test sets.
```
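Splitting into training, validation, and test sets can be as simple as a seeded shuffle of the image paths; the 80/10/10 ratios and file names below are illustrative, not the repo's actual scheme:

```python
import glob
import os
import numpy as np

paths = sorted(glob.glob("gestures/*/*.jpg"))
labels = [os.path.basename(os.path.dirname(p)) for p in paths]  # folder name = label

rng = np.random.default_rng(0)            # fixed seed keeps the split reproducible
order = rng.permutation(len(paths))
n_train = int(0.8 * len(paths))
n_val = int(0.1 * len(paths))

# 80/10/10 split over shuffled indices; the remainder becomes the test set.
train_idx = order[:n_train]
val_idx = order[n_train:n_train + n_val]
test_idx = order[n_train + n_val:]

# Persist everything so training and evaluation read identical partitions.
np.save("labels.npy", np.array(labels))
np.save("train_idx.npy", train_idx)
np.save("val_idx.npy", val_idx)
np.save("test_idx.npy", test_idx)
```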
```diff
-* To view all the gestures, run `display_all_gestures.py`.
-* Train the model using Keras by running `cnn_keras.py`.
-* Run `fun_util.py`. This will open up the gesture recognition window, which will use your webcam to interpret the trained American Sign Language gestures.
+* To view all the gestures, run `display_gestures.py`.
+* Train the model using Keras by running `cnn_model_train.py`.
+* Run `final.py`. This will open up the gesture recognition window, which will use your webcam to interpret the trained American Sign Language gestures.
 
 ## Code Examples
 
```
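A Keras CNN of the kind the training step implies; the layer sizes and the 50x50 grayscale input shape are assumptions, not necessarily the repo's exact architecture:

```python
from tensorflow.keras import layers, models

def build_model(num_classes):
    """A small CNN over 50x50 grayscale gesture images."""
    model = models.Sequential([
        layers.Input(shape=(50, 50, 1)),
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# model = build_model(num_classes=26)
# model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=10)
# model.save("cnn_model.h5")
```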
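And the recognition loop the final step suggests: segment each frame with the stored histogram, classify the hand region, and overlay the prediction. The model path, crop coordinates, and class names are placeholders:

```python
import cv2
import numpy as np
from tensorflow.keras.models import load_model

model = load_model("cnn_model.h5")   # placeholder path for the trained model
hist = np.load("hist.npy")
class_names = ["A", "B", "C"]        # illustrative labels, one per class

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Backproject the hand histogram and threshold it into a binary mask.
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.calcBackProject([hsv], [0, 1], hist, [0, 180, 0, 256], 1)
    _, thresh = cv2.threshold(mask, 20, 255, cv2.THRESH_BINARY)
    # Classify a fixed region of interest, scaled to the training input size.
    roi = cv2.resize(thresh[100:400, 100:400], (50, 50))
    x = roi.reshape(1, 50, 50, 1).astype("float32") / 255.0
    probs = model.predict(x, verbose=0)[0]
    label = class_names[int(np.argmax(probs))]
    cv2.putText(frame, label, (30, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("Gesture recognition", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```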

```diff
@@ -164,4 +164,4 @@ If you loved what you read here and feel like we can collaborate to produce some
 just want to shoot a question, please feel free to connect with me on <a href="[email protected]" target="_blank">email</a>,
 <a href="http://bit.ly/2uOIUeo" target="_blank">LinkedIn</a>, or
 <a href="http://bit.ly/2CZv1i5" target="_blank">Twitter</a>.
-My other projects can be found [here](http://bit.ly/2UlyFgC).
+My other projects can be found [here](http://bit.ly/2UlyFgC).
```
