Commit 3170f30

Merge pull request #237 from vidartf/stream

Add ipywebrtc captureStream interface

2 parents 156d452 + 646ad19

File tree

4 files changed: +372 −2 lines


examples/Capture.ipynb (+289)
@@ -0,0 +1,289 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Capture outputs\n",
    "\n",
    "This notebook demonstrates how to capture still frames or videos from pythreejs using [ipywebrtc](https://ipywebrtc.readthedocs.io/en/latest/)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Set up an example renderer"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from pythreejs import *\n",
    "import ipywebrtc\n",
    "from ipywidgets import Output, VBox"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "view_width = 600\n",
    "view_height = 400\n",
    "\n",
    "sphere = Mesh(\n",
    "    SphereBufferGeometry(1, 32, 16),\n",
    "    MeshStandardMaterial(color='red')\n",
    ")\n",
    "\n",
    "cube = Mesh(\n",
    "    BoxBufferGeometry(1, 1, 1),\n",
    "    MeshPhysicalMaterial(color='green'),\n",
    "    position=[2, 0, 4]\n",
    ")\n",
    "\n",
    "camera = PerspectiveCamera(position=[10, 6, 10], aspect=view_width/view_height)\n",
    "key_light = DirectionalLight(position=[0, 10, 10])\n",
    "ambient_light = AmbientLight()\n",
    "\n",
    "scene = Scene(children=[sphere, cube, camera, key_light, ambient_light])\n",
    "controller = OrbitControls(controlling=camera)\n",
    "renderer = Renderer(camera=camera, scene=scene, controls=[controller],\n",
    "                    width=view_width, height=view_height)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "renderer"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Capture renderer output to stream"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "stream = ipywebrtc.WidgetStream(widget=renderer, max_fps=30)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you want, you can preview the content of the stream with a video viewer. This should simply mirror what you see in the renderer."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": false
   },
   "outputs": [],
   "source": [
    "stream"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Capturing images\n",
    "\n",
    "To capture images from the stream, use the `ImageRecorder` widget from `ipywebrtc`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "recorder = ipywebrtc.ImageRecorder(filename='snapshot', format='png', stream=stream)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "There are two ways to capture images from the stream:\n",
    "1. Manually from the browser, using the widget view of the recorder.\n",
    "2. Programmatically, using the `save()`/`download()` methods on the recorder."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Using the view"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "recorder"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Here, clicking the \"Snapshot\" button captures a new frame and syncs it back to the *kernel side*; clicking \"Download\" downloads the current snapshot on the *client side*. If the image has changed, any observers of the image's `value` trait will trigger (e.g. `recorder.image.observe(callback, 'value')`):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "out = Output()  # To capture print output\n",
    "\n",
    "@out.capture()\n",
    "def on_capture(change):\n",
    "    print('Captured image changed!')\n",
    "recorder.image.observe(on_capture, 'value')\n",
    "out"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Using the kernel API\n",
    "\n",
    "To request a snapshot from the kernel, set the recorder's `recording` attribute to `True`. This updates the `image` attribute asynchronously. The easiest way to save the image on the kernel side is to also set the `filename` attribute and set `autosave` to `True`, which saves the image as soon as it is available. This is equivalent to observing the image widget's `value` trait and calling the `save()` method whenever the image changes."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "recorder.autosave = True\n",
    "recorder.recording = True"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You can also trigger a client-side download from the kernel by calling the `download()` method on the recorder:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "recorder.download()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Capturing video\n",
    "\n",
    "To capture a video from the stream, use the `VideoRecorder` from `ipywebrtc`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "video_recorder = ipywebrtc.VideoRecorder(stream=stream, filename='video', codecs='vp8')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "video_recorder"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Here, clicking the \"Record\" button starts capturing video. Once you click the \"Stop\" button (which appears after clicking \"Record\"), the video is displayed in the view and synced to the kernel. If the video has changed, any observers of the video's `value` trait will trigger, just as for the `ImageRecorder`. Clicking \"Download\" downloads the current video on the client side.\n",
    "\n",
    "The kernel-side API of the `VideoRecorder` is similar to that of the `ImageRecorder`, but you also have to tell it when to stop:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "video_recorder.autosave = True\n",
    "video_recorder.recording = True\n",
    "# After executing this, try to interact with the renderer above before executing the next cell"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "video_recorder.recording = False\n",
    "video_recorder.download()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
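The notebook's autosave behaviour (observe the image's `value` trait, write the bytes to `filename` when they change) can be sketched in plain Python with no widget dependencies. `MiniRecorder` below is a hypothetical stand-in for illustration, not the ipywebrtc API:

```python
# Hypothetical stand-in illustrating the autosave pattern described in the
# notebook: observe the 'value' bytes and save them to disk when they change.
import os
import tempfile

class MiniRecorder:
    def __init__(self, filename, format):
        self.filename = filename
        self.format = format
        self._value = b''
        self._observers = []

    def observe(self, callback):
        self._observers.append(callback)

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, data):
        # Only notify observers when the value actually changes
        if data != self._value:
            self._value = data
            for cb in self._observers:
                cb(data)

    def save(self):
        # Write the current bytes to '<filename>.<format>'
        path = '%s.%s' % (self.filename, self.format)
        with open(path, 'wb') as f:
            f.write(self._value)
        return path

rec = MiniRecorder(os.path.join(tempfile.gettempdir(), 'snapshot'), 'png')
# "autosave": call save() whenever the value trait changes
rec.observe(lambda data: rec.save())
rec.value = b'\x89PNG fake image bytes'
print(os.path.exists(rec.filename + '.png'))  # → True
```

In ipywebrtc itself the observation runs on the real traitlets `value` trait, but the control flow is the same: a change notification fires once per new value, and the save callback persists it.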

js/package.json (+2 −1)

@@ -20,7 +20,8 @@
     "build:labextension": "rimraf lab-dist && mkdirp lab-dist && cd lab-dist && npm pack ..",
     "build:all": "npm run build:labextension",
     "prepare": "npm run autogen",
-    "prepack": "npm run build:bundles-prod"
+    "prepack": "npm run build:bundles-prod",
+    "watch": "webpack -d -w"
   },
   "devDependencies": {
     "eslint": "^5.6.0",

js/src/_base/Renderable.js (+79)

@@ -1,6 +1,7 @@
 var _ = require('underscore');
 var widgets = require('@jupyter-widgets/base');
 var $ = require('jquery');
+var Promise = require('bluebird');

 var pkgName = require('../../package.json').name;
 var EXTENSION_SPEC_VERSION = require('../version').EXTENSION_SPEC_VERSION;
@@ -90,6 +91,83 @@ var RenderableModel = widgets.DOMWidgetModel.extend({
         this.trigger('childchange', this);
     },

+    /**
+     * Find a view, preferably a live one
+     */
+    _findView: function() {
+        var viewPromises = Object.keys(this.views).map(function(key) {
+            return this.views[key];
+        }, this);
+        return Promise.all(viewPromises).then(function(views) {
+            for (var i=0; i<views.length; ++i) {
+                var view = views[i];
+                if (!view.isFrozen) {
+                    return view;
+                }
+            }
+            return views[0];
+        });
+    },
+
+    /**
+     * Interface for jupyter-webrtc.
+     */
+    captureStream: function(fps) {
+        var stream = new MediaStream();
+
+        var that = this;
+        var canvasStream = null;
+
+        function updateStream() {
+            return that._findView().then(function(view) {
+                if (canvasStream !== null) {
+                    // Stop and remove tracks from previous canvas
+                    stream.getTracks().forEach(function(track) {
+                        track.stop();
+                        stream.removeTrack(track);
+                        canvasStream.removeTrack(track);
+                    });
+                    canvasStream = null;
+                }
+                var canvas;
+                if (view.isFrozen) {
+                    canvas = document.createElement('canvas');
+                    canvas.width = view.$frozenRenderer.width();
+                    canvas.height = view.$frozenRenderer.height();
+                    var ctx = canvas.getContext('2d');
+                    ctx.drawImage(view.$frozenRenderer[0], 0, 0);
+                } else {
+                    canvas = view.renderer.domElement;
+                }
+                // Add tracks from canvas to stream
+                canvasStream = canvas.captureStream(fps);
+                canvasStream.getTracks().forEach(function(track) {
+                    stream.addTrack(track);
+                    if (track.requestFrame) {
+                        (function() {
+                            var orig = track.requestFrame.bind(track);
+                            track.requestFrame = function() {
+                                orig();
+                                // Ensure we redraw so the stream picks up the first frame on Chrome
+                                // https://bugs.chromium.org/p/chromium/issues/detail?id=903832
+                                view.tick();
+                            };
+                            track.requestFrame();
+                        }());
+                    }
+                });
+
+                // If renderer status changes, update stream
+                that.listenToOnce(view, 'updatestream', updateStream);
+            });
+        }
+
+        return updateStream().then(function() {
+            return stream;
+        });
+    },
+
 }, {
     serializers: _.extend({
         clippingPlanes: { deserialize: unpackThreeModel },
@@ -199,6 +277,7 @@ var RenderableView = widgets.DOMWidgetView.extend({
         } else {
             this.renderer.setSize(width, height);
         }
+        this.trigger('updatestream');
     },

     updateProperties: function(force) {
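The view-selection rule in `_findView` (prefer the first live, i.e. non-frozen, view; otherwise fall back to the first view) can be mirrored in a few lines of Python. `find_view` is a hypothetical illustration of that rule, not part of pythreejs:

```python
# Mirror of _findView's selection rule: prefer a non-frozen view,
# otherwise fall back to the first view in the list.
def find_view(views):
    """Return the first live (non-frozen) view, else the first view."""
    for view in views:
        if not view['isFrozen']:
            return view
    return views[0]

views = [
    {'id': 'a', 'isFrozen': True},   # frozen: skipped
    {'id': 'b', 'isFrozen': False},  # live: preferred
]
print(find_view(views)['id'])  # → b
```

A frozen view is one whose WebGL context has been released (pythreejs freezes off-screen views to stay under the browser's context limit), so it can only serve stale pixels; that is why `captureStream` has to rebuild its canvas source, and re-run this selection, whenever the `updatestream` event fires.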