I was able to take a look at the EHT case study - I think it looks really good! You all have certainly done a nice job researching everything!
I had gone through and explored the EHT and GWO examples for a presentation back in January, which included using `eht-imaging` and `pycbc` to reproduce the black hole image and the GWO "chirp" signal, respectively. I just wanted to share a little feedback after reading through the EHT case study:
**EHT Case Study**
> This technique is used to synchronize these telescopes deployed across the globe to form one huge, Earth-size telescope capable of observing at a wavelength of 1.3 mm. This means EHT can achieve an angular resolution of 20 micro-arcseconds — enough to read a newspaper in New York from a sidewalk café in Paris!
The wording here seems to conflate the observation wavelength and the angular resolution. Note that the angular resolution is a consequence of the VLBI approach.
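For context (not something the case study necessarily needs to include), the ~20 µas figure follows from the diffraction limit θ ≈ λ/B, where B is the longest projected baseline - roughly an Earth diameter for the EHT. The baseline value below is my own back-of-the-envelope assumption, not a number from the case study:

```python
import numpy as np

wavelength = 1.3e-3   # observing wavelength in meters (1.3 mm)
baseline = 1.274e7    # assumed longest projected baseline in meters (~Earth diameter)

theta_rad = wavelength / baseline                 # diffraction-limited resolution in radians
theta_uas = np.degrees(theta_rad) * 3600 * 1e6    # convert radians -> micro-arcseconds

print(f"Angular resolution: {theta_uas:.0f} micro-arcseconds")  # ~21 µas, consistent with the ~20 µas quoted
```

So the resolution is really set by the Earth-scale baselines that VLBI provides; the 1.3 mm wavelength alone doesn't get you there.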
> To study the most extreme objects in the Universe predicted by Einstein’s theory of general relativity, during the centennial year of the historic experiment that first confirmed the theory (2017).
The experiment being referred to here (Sir Arthur Eddington's observation of starlight bending during a solar eclipse) occurred in 1919, not 1917. Also - I wouldn't necessarily call this bullet point an "imaging objective".
> Black holes have been theoretically predicted and observed but a real image was never created until now.
It's not clear what is meant by a "real" image - perhaps a different word like "direct"?
Re: the challenges section
As you note, many different software tools were used throughout the analysis pipeline; however, the role that NumPy plays across the entire pipeline is less clear than it is in `eht-imaging`. For example, some of the tools used in collection/calibration (e.g. HOPS, AIPS) don't have the clear dependencies on the scientific Python ecosystem that `eht-imaging` does. From a NumPy-centric view, it might be better to focus more on the imaging and less on the pre-processing components.
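To make the NumPy-centric framing concrete, here's a toy sketch of the imaging idea - visibilities as sparse samples of the image's 2-D Fourier transform, with a crude reconstruction via an inverse FFT. This is purely illustrative and is *not* how `eht-imaging` actually works (the real pipeline uses regularized maximum-likelihood imaging and external FFT/NFFT libraries rather than `numpy.fft`); it just shows that the "processed" data at the imaging stage is a natural fit for array-based workflows:

```python
import numpy as np

# Toy model image: a small ring-like blob on a 64x64 grid
n = 64
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
model = np.exp(-((np.hypot(x, y) - 10) ** 2) / 8.0)

# Forward step: visibilities are samples of the image's 2-D Fourier transform
vis = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(model)))

# Sparse (u, v) coverage: keep only a random subset of Fourier samples,
# mimicking the fact that an interferometer samples the plane incompletely
rng = np.random.default_rng(0)
mask = rng.random(vis.shape) < 0.15
vis_sampled = np.where(mask, vis, 0)

# Inverse step: a "dirty image" reconstructed from the incomplete Fourier data
dirty = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(vis_sampled))).real

print(model.shape, dirty.shape, dirty.max())
```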
I really like the dependency graphic (in fact, I used it in my presentation - thanks for putting it together)!
> Imaging of M87 black hole is a major scientific feat that was almost presumed impossible a century ago
This statement is technically true, though I'm not aware of anyone having proposed to image a black hole a century ago.
A final thought on the last graphic: I don't think the examples actually map to NumPy that well. It's not clear to me that NumPy played a significant role in handling the data at scale (350 TB/day). The "processed" data used for imaging are much more manageable. Similarly, it's not clear how complexity maps to NumPy - maybe something like "Flexibility" instead? Finally, speed can be a little misleading too - especially since the imaging depends on external libraries for FFTs rather than `numpy.fft`.
This was just a quick once-over. I'm happy to create a PR with suggestions, but I wasn't sure whether such changes were being sought. Let me know!
I plan to go over the GWO case study as well ASAP.
@rossbar, many thanks for your valuable input and for reviewing it. Please go ahead and create a PR with your suggestions. I'd certainly like to make sure the case study is more accurate and to fine-tune it for better clarity.
Looking forward to your input on the GW/LIGO one as well.
@rossbar are you still planning to put this in a PR? That would be very welcome! We're getting close to being ready for launch; if you could prioritize this, that would be super helpful.