Why HDMI out instead of DisplayPort out?

Hello, I’ve been following the MNT Reform project for a while and just put in my order for the DIY kit. I’m super hyped for it, but I had a question: why was it decided to go with an HDMI out instead of a DisplayPort out? My understanding is that DisplayPort is open but HDMI is proprietary, so it has me a bit confused.

My understanding - which could be entirely wrong, bear in mind - is that for HDMI to be HDMI it has to be certified as HDMI, which costs money. If you don’t certify your product as HDMI, then it’s not actually HDMI: it’s effectively DVI but with an HDMI connector.

See this Raspberry Pi Pico add-on, which has an HDMI connector but is very careful to explain it’s actually a DVI signal.

It’s exactly the same thing as when companies refer to their SD card slot as a “TF card” slot: they’re technically identical; the former just means you’ve paid the SD Card Association money to certify your product.

That said, I have no idea whether MNT’s paid for licensing or not - but it would not surprise me at all if it was HDMI in shape only, and was uncertified DVI-over-HDMI. That way it offers the best of both worlds: it’s still open and didn’t involve paying anyone a licensing fee, but it’s using the connector that every TV and most monitors out there have so you’re not scrabbling around for adapters.


That makes a lot of sense.
Thanks for the reply, I really appreciate it!

Now that you raise the question, I would be interested in the answer as well.

The data sheet for the i.MX 8M processor used mentions a PHY which can generate both HDMI and DisplayPort signals (chapter “13.5 HDMI / DisplayPort Transmit PHY (HDMI PHY)” in the “i.MX 8M Dual/8M QuadLite/8M Quad Applications Processors Reference Manual”).

As far as I understand, a binary blob is needed for HDMI support (likely for HDCP), and it would be nice if DP worked without the blob. I suspect it doesn’t.

There’s another video protocol supported by the processor, called DSI, which is often used for screens on phones. On the Reform, the processor outputs DSI signals to a bridge chip, which converts them to eDP and connects to the internal display.

Since the HDMI PHY is also documented to support eDP, the screen could theoretically also be connected to the PHY directly, without requiring the bridge chip.

It’s likely that DSI is used for the internal screen to enable higher-resolution output to an external screen. DSI is limited to 4 lanes at 1.5 Gbps each (if I skimmed the data sheet correctly), which should support 24-bit 1440p@60 Hz if my math checks out (simplified stream bit rate without sync periods: 2560 * 1440 * 60 * 24 ≈ 5.3 Gbps; DSI bandwidth = 4 * 1.5 Gbps = 6 Gbps). The processor is advertised to support 4k@60, so the HDMI PHY would need to be used to achieve this.
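The arithmetic above is easy to double-check in a shell. Note it deliberately ignores blanking/sync overhead, as stated:

```shell
# Simplified stream bit rate for 2560x1440 @ 60 Hz at 24 bpp (no blanking):
echo $((2560 * 1440 * 60 * 24))   # 5308416000 bits/s, ~5.3 Gbps

# Aggregate DSI bandwidth: 4 lanes at 1.5 Gbps each:
echo $((4 * 1500000000))          # 6000000000 bits/s, 6 Gbps
```

So the internal panel fits within the DSI budget, but only with little headroom once real blanking periods are included.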

As for why HDMI instead of DP, I think ghalfacree’s answer of using the connector most displays already have seems likely.


I performed some experiments using a NEC P241W monitor (resolution 1920x1200, DVI input), an LG 22EA53 (resolution 1920x1080, DVI and HDMI inputs), and a TV (resolution 1920x1080, HDMI input). As video sources I used the MNT Reform (set to dual display mode) and a Purism Librem 13 (also a MacBook Pro from 2015 and a small desktop computer, but their results were the same as the Librem 13’s, so I’ll focus on it and the Reform).

Regardless of the external monitor used, when I started the Reform with the HDMI cable plugged in, the image was only displayed on the external monitor (nothing on the built-in screen). Then, when I ran sway, the image was displayed on both screens, but the internal one was unstable: it oscillated between workspaces 1 and 2 at a frequency of about 1-2 Hz, which made working impossible. The video on the external monitor, on the other hand, was stable and usable. This happened with all external displays.

When I exited sway, the image on the internal display disappeared again and was only shown on the external display.

When I started the Reform without the HDMI cable connected and plugged it in after logging in (still in text/CLI mode), the external monitor displayed nothing. The image appeared after running sway, but with the same blinking problem on the internal screen.

But when I first started sway and only then connected the HDMI cable, everything was OK. The image was usable on both screens and the system was stable; I was able to run and use applications on both screens.

This was independent of the external screen (NEC, LG, or TV) and of the connection on its side (HDMI or DVI, i.e. the cable was either HDMI-HDMI or HDMI-DVI). Additional information: I have an NVMe disk connected, but the system was started from and ran on the SD card; the NVMe was not accessed during these tests.

It’s not a big problem for me, and I’m not sure whether this can be solved given the blob managing the HDMI output. But I wanted to record this information to see if others have noticed similar behavior.

This post is getting long, so I’ll write about the differences in available resolutions later.


It is a known bug (probably in the kernel or Mesa) that messes up the display when plugging in HDMI before starting sway. I believe the issue is related to driving the same GPU with two different display engines / framebuffers (DCSS and LCDIF), which is probably a less tested edge case.

Regarding the other topic: to my knowledge, the same HDMI blob is required for DP out as well. I chose HDMI because it is more widespread, so the utility of having an HDMI output appeared to be higher.


Here is the promised second part of my experiences with HDMI and DVI connections.

An HDMI-HDMI cable offers the full list of resolutions advertised by the monitor. When I used it with the LG monitor (1920x1080), swaymsg -t get_outputs showed many resolutions, from 640x480 up to the native 1920x1080. The same was true for the TV.
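For anyone who wants to compare mode lists the same way, the resolutions can be extracted from the get_outputs JSON with jq. The sample below is a trimmed, made-up excerpt of the JSON (field names follow sway’s IPC format; refresh is reported in mHz); on a live system, pipe `swaymsg -t get_outputs -r` into the same jq filter instead of the here-doc:

```shell
# Trimmed, hypothetical sample of `swaymsg -t get_outputs -r` output,
# filtered down to one line per advertised mode:
cat <<'EOF' | jq -r '.[] | .name as $n | .modes[] | "\($n): \(.width)x\(.height)@\(.refresh / 1000)Hz"'
[{"name":"HDMI-A-1","modes":[{"width":640,"height":480,"refresh":60000},{"width":1920,"height":1080,"refresh":60000}]}]
EOF
# Prints:
# HDMI-A-1: 640x480@60Hz
# HDMI-A-1: 1920x1080@60Hz
```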

On the other hand, when I used an HDMI-DVI cable, the same monitor offered only 1280x720 and 1920x1080. This casts some doubt on the suggestion above (by @ghalfacree) about possible DVI-over-HDMI (at the least, it does not offer all features), but I don’t have an explanation for it.

The situation with the NEC monitor (1920x1200) is even more complex. The only resolution offered is 1280x720 (over DVI, as the monitor has no HDMI input). That’s one of the two resolutions the LG offered over DVI, but the higher resolution is missing. Both swaymsg -t get_outputs and dmesg show only this one mode, and trying to force anything else fails. I tried three different cables. On the other hand, when I try the same on the Purism Librem 13 (again with an HDMI-DVI cable) I get a full list of resolutions, though that list does not contain 1920x1080. So I guess the i.MX 8M only supports a limited set of resolutions, possibly only 16:9 or similar ratios.
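For reference, this is what attempting to force a mode in sway looks like, as a config sketch (the output name HDMI-A-1 is an assumption; check swaymsg -t get_outputs for the real name on your system). On the NEC, attempts like this fail as described above:

```
# ~/.config/sway/config: try to force the panel's native mode on the external output.
# The same can be issued at runtime with: swaymsg output HDMI-A-1 mode 1920x1200@60Hz
output HDMI-A-1 mode 1920x1200@60Hz
```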

As HDMI depends on a closed blob, there is not much we can do now. But I wanted to document my experiences; if someone has more data, please share.

There is a correct and an incorrect observation here:

It is true that the list of HDMI resolutions is fixed, but it has nothing to do with the blob. The blob is not a driver; it is only a tiny bit of control code that runs in the HDMI block’s MCU, and as far as I know it is mostly concerned with HDCP. The driver itself is open source: I ported it over from the vendor kernel (as mainline does not have the HDMI/DP TX driver yet, AFAIK).

Two points of failure could be something going wrong with EDID over DVI or the frequencies missing from this table:

I had to modify this table once to make a 1080x1920 (portrait) display work, which needed a previously unsupported timing.
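A rough way to see why a given mode can be missing from such a table: every mode implies a pixel clock, and that clock must be covered by the supported frequencies. A back-of-envelope estimate for 1080x1920@60 follows, assuming roughly 20% blanking overhead (real CVT/DMT timings will differ somewhat):

```shell
# Active pixels per second for 1080x1920 @ 60 Hz:
echo $((1080 * 1920 * 60))              # 124416000, ~124.4 MHz
# With ~20% blanking overhead the pixel clock lands near:
echo $((1080 * 1920 * 60 * 12 / 10))    # 149299200, ~149.3 MHz
```

If the nearest supported entry in the table doesn’t cover that clock, the mode simply never shows up, which matches the behavior described above.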


Thanks for the details!
I’m glad to be wrong here; I’ve already cloned the repository and intend to experiment with different modes. But first I want to understand all those numbers :smile: