## Setup
* Use the command prompt to set up the environment using the install_packages.txt and install_packages_gpu.txt files.

`python -m pip install -r install_packages.txt`

This will install all the libraries required for the project.

## Process

* Run `set_hand_histogram.py` to set the hand histogram for creating gestures; a minimal sketch of the histogram idea follows this list.
* Once you get a good histogram, save it in the code folder, or you can use the histogram created by us that can be found [here](https://github.com/harshbg/Sign-Language-Interpreter-using-Deep-Learning/blob/master/Code/hist).
* Add gestures and label them using the OpenCV webcam feed by running `create_gestures.py`, which stores them in a database (a capture sketch follows this list). Alternatively, you can use the gestures created by us [here](https://github.com/harshbg/Sign-Language-Interpreter-using-Deep-Learning/tree/master/Code).
* Add different variations to the captured gestures by flipping all the images with `Rotate_images.py` (see the flip sketch after this list).
* Run `load_images.py` to split all the captured gestures into training, validation, and test sets (see the split sketch after this list).
* To view all the gestures, run `display_gestures.py`.
* Train the model using Keras by running `cnn_model_train.py` (see the CNN sketch after this list).
* Run `final.py`. This will open up the gesture recognition window, which will use your webcam to interpret the trained American Sign Language gestures (see the recognition-loop sketch after this list).
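
The histogram step works by sampling the hand's colors once and then back-projecting them onto later frames to segment the hand. Below is a minimal OpenCV sketch of that idea; the ROI coordinates and variable names are illustrative assumptions, not the exact code in `set_hand_histogram.py`.

```python
# Sketch: build a hue/saturation histogram of the hand, then back-project it.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
ret, frame = cap.read()
assert ret, "could not read a frame from the webcam"
roi = frame[100:300, 100:300]                 # hypothetical box covered by the hand
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)

# Histogram over hue and saturation of the sampled hand pixels
hand_hist = cv2.calcHist([hsv_roi], [0, 1], None, [180, 256], [0, 180, 0, 256])
cv2.normalize(hand_hist, hand_hist, 0, 255, cv2.NORM_MINMAX)

# Back-project the histogram onto a fresh frame to highlight skin-colored pixels
ret, frame = cap.read()
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
mask = cv2.calcBackProject([hsv], [0, 1], hand_hist, [0, 180, 0, 256], 1)
cap.release()
```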
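
A gesture-capture loop like the one in `create_gestures.py` grabs a fixed region of each webcam frame and saves it under a label. This sketch assumes a simple one-folder-per-label layout; the repository stores labels in a database, so treat the paths and counts here as hypothetical.

```python
# Sketch: capture labeled gesture crops from the webcam into gestures/<label>/.
import cv2
import os

label = "A"                                   # hypothetical gesture label
os.makedirs(f"gestures/{label}", exist_ok=True)

cap = cv2.VideoCapture(0)
count = 0
while count < 1200:                           # e.g. 1200 samples per gesture
    ret, frame = cap.read()
    if not ret:
        break
    roi = frame[100:300, 100:300]             # fixed capture box for the hand
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    cv2.imwrite(f"gestures/{label}/{count}.jpg", cv2.resize(gray, (50, 50)))
    count += 1
    cv2.rectangle(frame, (100, 100), (300, 300), (0, 255, 0), 2)
    cv2.imshow("Capture", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):     # press q to stop early
        break
cap.release()
cv2.destroyAllWindows()
```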
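
Flipping every image horizontally doubles the dataset and adds mirror variants of each gesture. A minimal sketch, assuming the captured images sit in per-label folders (a hypothetical layout, not necessarily what `Rotate_images.py` does internally):

```python
# Sketch: write a horizontally flipped copy next to every captured image.
import cv2
import glob
import os

for path in glob.glob("gestures/*/*.jpg"):    # hypothetical layout: gestures/<label>/<n>.jpg
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    flipped = cv2.flip(img, 1)                # 1 = flip around the vertical axis (mirror)
    root, ext = os.path.splitext(path)
    cv2.imwrite(root + "_flipped" + ext, flipped)
```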
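
The split step shuffles the data once and cuts it into three contiguous slices. A sketch under the assumption that images and labels are parallel NumPy arrays; the 70/15/15 ratio is illustrative, not necessarily what `load_images.py` uses.

```python
# Sketch: shuffle, then slice into training / validation / test sets.
import numpy as np

def split_dataset(images, labels, seed=0):
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(images))      # shuffle before splitting
    images, labels = images[order], labels[order]
    n_train = int(0.70 * len(images))         # first 70% for training
    n_val = int(0.85 * len(images))           # next 15% for validation
    return ((images[:n_train], labels[:n_train]),
            (images[n_train:n_val], labels[n_train:n_val]),
            (images[n_val:], labels[n_val:]))  # last 15% for testing
```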
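
A small convolutional network is enough for low-resolution gesture crops. This is a hedged sketch of such a Keras model; the input size, layer sizes, and class count are assumptions, not the exact architecture in `cnn_model_train.py`.

```python
# Sketch: a compact CNN classifier for 50x50 grayscale gesture images.
from tensorflow.keras import layers, models

NUM_CLASSES = 44  # illustrative; use the number of gestures you captured

model = models.Sequential([
    layers.Input(shape=(50, 50, 1)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),                      # regularize the dense head
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=15)
```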
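
The recognition window ties the pieces together: back-project the saved histogram, threshold the hand region, and feed it to the trained model. The file names (`cnn_model.h5`, `hist.npy`) and preprocessing below are assumptions, not the exact code in `final.py`.

```python
# Sketch: live webcam loop that segments the hand and classifies each frame.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

model = load_model("cnn_model.h5")            # hypothetical trained-model file
hand_hist = np.load("hist.npy")               # hypothetical saved hand histogram

cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.calcBackProject([hsv], [0, 1], hand_hist, [0, 180, 0, 256], 1)
    _, thresh = cv2.threshold(mask, 100, 255, cv2.THRESH_BINARY)
    hand = cv2.resize(thresh, (50, 50)).reshape(1, 50, 50, 1) / 255.0
    probs = model.predict(hand, verbose=0)    # class probabilities
    cv2.putText(frame, f"gesture: {np.argmax(probs)}", (30, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("Gesture recognition", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):     # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```
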
## Code Examples

If you loved what you read here and feel like we can collaborate to produce some exciting stuff, or if you
just want to shoot a question, please feel free to connect with me on <a href="[email protected]" target="_blank">email</a>,
<a href="http://bit.ly/2uOIUeo" target="_blank">LinkedIn</a>, or