Discussion:
RFC: Integrating Virgil and Spice
Hans de Goede
2013-10-08 10:25:01 UTC
Hi All,

I realize that it may be a bit early to start this discussion,
given the somewhat preliminary state of Virgil, but I would still
like to start it now, for two reasons:

1) I believe it would be good to start thinking about this earlier
rather than later.

2) I would like to present a general overview of a plan for this
at kvm-forum, to get input from the wider kvm community.

I've already had a quick discussion about this with Dave Airlie, and
our ideas on this aligned perfectly.

The basic idea is to use qemu's console layer (include/ui/console.h)
as an abstraction between the new virtio-vga device Dave has in mind
(which will include optional 3D rendering capability through VIRGIL),
and various display options, ie SDL, vnc and Spice.

The console layer would need some extensions for this:

1) Multi head support, a question which comes up here, is do we only
add support for multiple heads on a single card, or do we also want
to support multiple cards each driving a head here ? I myself tend
to go with the KISS solution for now and only support a single
card with multiple heads.

2) The ability for a video-card generating output to pass a dma-buf
context to the display (ui in qemu terms) to get the contents from,
rather than requiring the contents to be rendered to some memory
buffer. This way we can avoid the quite expensive read-back of the
rendered result from gpu memory, and the copy of that result back into
the gpu's framebuffer, for local displays (ie gtk, SDL); we would
of course still need the read-back of the rendered output for
vnc / spice.
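
To make 2) a bit more concrete, here is a purely hypothetical sketch of what
such a hook in qemu's console layer could look like; none of these names
exist in qemu today, they only illustrate handing a dma-buf fd to the ui
instead of a rendered memory buffer:

#include <stdint.h>

/* Hypothetical extension to include/ui/console.h -- illustration only. */
typedef struct DisplayChangeListener DisplayChangeListener;

typedef struct QemuDmaBuf {
    int      fd;       /* dma-buf file descriptor exported by the host GPU */
    uint32_t width;
    uint32_t height;
    uint32_t stride;
    uint32_t fourcc;   /* drm fourcc describing the pixel format */
} QemuDmaBuf;

typedef struct DisplayChangeListenerOps {
    /* ... existing callbacks (dpy_gfx_update, dpy_gfx_switch, ...) ... */

    /* New, optional: the device hands over a dma-buf to scan out from.
     * UIs that cannot deal with dma-bufs leave this NULL and keep getting
     * the read-back pixel data through the existing callbacks. */
    void (*dpy_gl_scanout)(DisplayChangeListener *dcl,
                           const QemuDmaBuf *dmabuf);
} DisplayChangeListenerOps;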

For proper multi-head support in the ui layer for local displays,
we will need to use SDL-2, either by porting the current SDL ui code
to SDL-2, or by introducing a new SDL-2 ui component.

The changes needed to the gtk ui for multi-head support are not clear
at this moment (no-one has looked into this yet AFAIK).

Regards,

Hans


p.s.

Note that having multi-head support in qemu's console layer +
a multi-head capable SDL-2 ui, means that we could also use
a qxl device together with the SDL-2 ui to do multi-head
locally, which could be interesting for a variety of use-cases.
Gerd Hoffmann
2013-10-08 13:18:46 UTC
Hi,
Post by Hans de Goede
The basic idea is to use qemu's console layer (include/ui/console.h)
as an abstraction between the new virtio-vga device Dave has in mind
(which will include optional 3D rendering capability through VIRGIL),
and various display options, ie SDL, vnc and Spice.
1) Multi head support, a question which comes up here, is do we only
add support for multiple heads on a single card, or do we also want
to support multiple cards each driving a head here ? I myself tend
to go with the KISS solution for now and only support a single
card with multiple heads.
Support for multiple cards is there. Well, at least the groundwork.
The ui core can deal with it. spice can deal with it. Secondary qxl
cards used to completely bypass the qemu console subsystem. This is no
longer the case with qemu 1.5+.

Not all UIs can deal with it in a sane way though. With SDL and VNC the
secondary qxl card is just another console, so ctrl-alt-<nr> can be used
to switch to it.

I once had an experimental patch to make the gtk ui open a second window
for the secondary card. It didn't end up upstream and isn't in my git tree
any more; IIRC I dropped it during one of the rebases. It isn't hard to
redo though.


That leaves the question how to do single-card multihead. I think the
most sensible approach here is to go the spice route, i.e. have one big
framebuffer and define scanout rectangles for the virtual monitors.
This is how real hardware works, and it also provides a natural fallback
mode for UIs not supporting scanout rectangles: they show a single
window with the whole framebuffer, similar to old spice clients.

To get that done we effectively have to handle the monitor config
properly at qemu console level instead of having a private channel
between qxl and spice.
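
Purely as an illustration of the "one big framebuffer plus scanout
rectangles" model (this is not an existing qemu struct, it just mirrors the
spirit of spice's monitors config):

#include <stdint.h>

/* Illustration only, not an existing qemu interface. */
struct scanout_rect {
    uint32_t x, y;           /* top-left corner inside the framebuffer */
    uint32_t width, height;  /* size of this virtual monitor */
};

struct monitors_config {
    uint32_t fb_width;       /* size of the single big framebuffer */
    uint32_t fb_height;
    uint32_t fb_stride;
    uint32_t num_heads;
    struct scanout_rect heads[8];
};

/* A ui that does not understand scanout rectangles ignores 'heads' and
 * just shows the whole fb_width x fb_height surface in one window, which
 * is exactly the fallback behaviour described above. */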
Post by Hans de Goede
2) The ability for a video-card generating output to pass a dma-buf
context to the display (ui in qemu terms) to get the contents from,
rather than requiring the contents to be rendered to some memory
buffer. This way we can save the quite expensive read-back from gpu
memory of the rendered result and then copying that back to the
framebuffer of the gpu for local displays (ie gtk, SDL),
Hmm? Not sure what you are asking for...

First, reading from gpu memory isn't expensive. It's all virtual, no
slow read cycles as with real hardware. There is almost no difference
between gpu memory and main memory for kvm guests. It's not clear to me
why you are copying stuff from/to gpu memory.

Second, you can have your scanout framebuffer in main memory. That
isn't a problem at all. It only needs to be contiguous in guest
physical memory, scatter-gather for the framebuffer isn't going to fly.
Post by Hans de Goede
For proper multi-head support in the ui layer for local displays,
we will need to use SDL-2, either by porting the current SDL ui code
to SDL-2, or by introducing a new SDL-2 ui component.
/me votes for a new SDL-2 ui component, the historically grown SDL code can
use a rewrite anyway ;)

cheers,
Gerd
Hans de Goede
2013-10-08 13:37:56 UTC
Hi,
Post by Gerd Hoffmann
Hi,
Post by Hans de Goede
The basic idea is to use qemu's console layer (include/ui/console.h)
as an abstraction between the new virtio-vga device Dave has in mind
(which will include optional 3D rendering capability through VIRGIL),
and various display options, ie SDL, vnc and Spice.
<snip>
Post by Gerd Hoffmann
Post by Hans de Goede
2) The ability for a video-card generating output to pass a dma-buf
context to the display (ui in qemu terms) to get the contents from,
rather than requiring the contents to be rendered to some memory
buffer. This way we can save the quite expensive read-back from gpu
memory of the rendered result and then copying that back to the
framebuffer of the gpu for local displays (ie gtk, SDL),
Hmm? Not sure what you are asking for...
First, reading from gpu memory isn't expensive. It's all virtual, no
slow read cycles as with real hardware. There is almost no difference
between gpu memory and main memory for kvm guests. It's not clear to me
why you are copying stuff from/to gpu memory.
This is mostly Dave's area of expertise, but let me try to explain things
a bit better here. The dma-buf pass-through is for the Virgil case: we're
passing 3D rendering commands from the guest through to a real, physical
GPU inside the host, which then renders the final image to show inside
the ui into its own, potentially on-card, memory, and reading back from
that memory is expensive.

When displaying locally (so SDL-2 or gtk ui), we want to avoid the read by
passing a kernel dma_buf handle from the virtual card (in this case
virtio-vga with Virgil) to the ui (in this case SDL-2), so it can then
directly ask the GPU to blit from that dma_buf to its own visible surface.
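
A rough sketch of what the ui side of that blit could look like, assuming
the host GL stack supports EGL_EXT_image_dma_buf_import and GL_OES_EGL_image
(error handling omitted; in real code the extension entry points would be
looked up with eglGetProcAddress):

#define EGL_EGLEXT_PROTOTYPES
#define GL_GLEXT_PROTOTYPES
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>

/* Import a dma-buf fd received from qemu as a GL texture the ui can then
 * blit/draw to its visible surface without any CPU copy. */
GLuint import_dmabuf_as_texture(EGLDisplay dpy, int fd, int width,
                                int height, int stride, uint32_t fourcc)
{
    const EGLint attrs[] = {
        EGL_WIDTH,                     width,
        EGL_HEIGHT,                    height,
        EGL_LINUX_DRM_FOURCC_EXT,      (EGLint)fourcc,
        EGL_DMA_BUF_PLANE0_FD_EXT,     fd,
        EGL_DMA_BUF_PLANE0_OFFSET_EXT, 0,
        EGL_DMA_BUF_PLANE0_PITCH_EXT,  stride,
        EGL_NONE
    };
    EGLImageKHR image = eglCreateImageKHR(dpy, EGL_NO_CONTEXT,
                                          EGL_LINUX_DMA_BUF_EXT, NULL, attrs);
    GLuint tex;

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    /* Bind the imported image as the texture's storage. */
    glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, image);
    return tex;
}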
Post by Gerd Hoffmann
Second, you can have your scanout framebuffer in main memory. That
isn't a problem at all. It only needs to be contiguous in guest
physical memory, scatter-gather for the framebuffer isn't going to fly.
This is not about the virtual gpu / virtual scanout buffer, this is
about a real GPU used to do the (final) rendering and about getting
that rendering shown to a local user (ie SDL or gtk ui).

So the rendered image is stored in memory owned by the real GPU, and
we need to get this "copied" to the window the user is viewing, without
using the CPU.
Post by Gerd Hoffmann
Post by Hans de Goede
For proper multi-head support in the ui layer for local displays,
we will need to use SDL-2, either by porting the current SDL ui code
to SDL-2, or by introducing a new SDL-2 ui component.
/me votes for a new SDL-2 ui component, the historically grown SDL code can
use a rewrite anyway ;)
I was already expecting you would prefer the new SDL-2 ui component
solution :)

Regards,

Hans
Gerd Hoffmann
2013-10-08 16:05:54 UTC
Hi,
Post by Hans de Goede
This is mostly Dave's area of expertise, but let me try to explain things
a bit better here. The dma-buf pass-through is for the Virgil case, so
we're passing through 3D rendering commands from the guest to a real,
physical GPU inside the host, which then renders the final image to show
inside the ui to its own, potentially on card, memory, reading from which
is expensive.
Ah, host dma-buf not guest dma-buf. It makes more sense then.

So virgil just opens one of those new render-only drm nodes, asks the
gpu to process the rendering ops from the guest & store the results in a
dma-buf, then this dma-buf must be displayed somehow, correct?
Post by Hans de Goede
When displaying locally (so SDL-2 or gtk ui), we want to avoid the read by
passing a kernel dma_buf handle from the virtual card (in this case
virtio-vga with Virgil) to the ui (in this case SDL-2), so it can then
directly ask the GPU to blit from that dma_buf to its own visible surface.
Hmm. Both SDL and gtk ui have the problem that they effectively bind
your VM to the desktop session. Which is not what you want for anything
but short-running test VMs. It's also a PITA to maintain them with
libvirt.

Any plans for a separate UI process? Something using a unix socket for
control commands and to hand over a dma-buf handle using file descriptor
passing maybe?
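
The fd handover itself would just be standard SCM_RIGHTS passing over an
AF_UNIX socket; the sending side would look roughly like this (socket setup
and error handling omitted):

#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Send one file descriptor (e.g. a dma-buf fd) over a connected
 * AF_UNIX socket using SCM_RIGHTS ancillary data. */
static int send_fd(int sock, int fd)
{
    char payload = 'D';                 /* need at least one byte of data */
    struct iovec iov = { .iov_base = &payload, .iov_len = 1 };
    char cmsgbuf[CMSG_SPACE(sizeof(int))];
    struct msghdr msg = {
        .msg_iov = &iov, .msg_iovlen = 1,
        .msg_control = cmsgbuf, .msg_controllen = sizeof(cmsgbuf),
    };
    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);

    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type  = SCM_RIGHTS;
    cmsg->cmsg_len   = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

    return sendmsg(sock, &msg, 0) < 0 ? -1 : 0;
}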

cheers,
Gerd
Marc-André Lureau
2013-10-08 16:20:11 UTC
Hi
Post by Gerd Hoffmann
Any plans for a separate UI process? Something using a unix socket for
control commands and to hand over a dma-buf handle using fd descriptor
passing maybe?
It sounds to me like this is something that an EGL extension should provide, but I can't find one yet.
Dave Airlie
2013-10-08 22:51:13 UTC
Post by Gerd Hoffmann
Ah, host dma-buf not guest dma-buf. It makes more sense then.
yes host side for the viewer.
Post by Gerd Hoffmann
So virgil just opens one of those new render-only drm nodes, asks the
gpu to process the rendering ops from the guest & store the results in a
dma-buf, then this dma-buf must be displayed somehow, correct?
Yes, the viewer would essentially be a compositing process, taking the outputs
from multiple VMs and deciding where to draw them. I suppose like boxes
does now.
Post by Gerd Hoffmann
Post by Hans de Goede
When displaying locally (so SDL-2 or gtk ui), we want to avoid the read by
passing a kernel dma_buf handle from the virtual card (in this case
virtio-vga with Virgil) to the ui (in this case SDL-2), so it can then
directly ask the GPU to blit from that dma_buf to its own visible surface.
Hmm. Both SDL and gtk ui have the problem that they effectively bind
your VM to the desktop session. Which is not what you want for anything
but short-running test VMs. It's also a PITA to maintain them with
libvirt.
Yeah I've hit that. So far virgil is only running via libvirt with a very hacked
insecure config to draw to the local X server of my user. Getting past that
is however going to involve a bit of lifting all over the place.
Post by Gerd Hoffmann
Any plans for a separate UI process? Something using a unix socket for
control commands and to hand over a dma-buf handle using fd descriptor
passing maybe?
That would be the local rendering solution I think we'd prefer:

qemu runs as the qemu user, uses EGL to talk to the drm render-nodes, and
has some sort of unix socket that the viewer connects to and that fds can
be handed across; the client viewer then imports the handles into EGL,
uses EGL or GLX to render on-screen, and displays the contents. There may
be a small bit of sync info to send across.
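
Roughly, the qemu side setup would look something like this (sketch only;
"/dev/dri/renderD128" is just an example node, and in real code the EGL
extension entry points get looked up via eglGetProcAddress):

#define EGL_EGLEXT_PROTOTYPES
#include <fcntl.h>
#include <gbm.h>
#include <EGL/egl.h>
#include <EGL/eglext.h>

/* Open a DRM render node and bring up EGL on top of it via gbm.  No X
 * server or desktop session is involved, so this can run as the
 * unprivileged qemu user. */
static EGLDisplay egl_display_from_rendernode(void)
{
    int fd = open("/dev/dri/renderD128", O_RDWR | O_CLOEXEC);
    struct gbm_device *gbm = gbm_create_device(fd);
    EGLDisplay dpy = eglGetPlatformDisplayEXT(EGL_PLATFORM_GBM_MESA,
                                              gbm, NULL);

    eglInitialize(dpy, NULL, NULL);
    return dpy;
}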

For remoting then we'd have an extra readback (slow) from the GPU and
then spice or vnc encoding stages.

Dave.
Marc-André Lureau
2013-10-09 01:26:50 UTC
Post by Dave Airlie
Post by Gerd Hoffmann
Ah, host dma-buf not guest dma-buf. It makes more sense then.
yes host side for the viewer.
Post by Gerd Hoffmann
So virgil just opens one of those new render-only drm nodes, asks the
gpu to process the rendering ops from the guest & store the results in a
dma-buf, then this dma-buf must be displayed somehow, correct?
Yes, the viewer would essentially be a compositing process, taking the outputs
from multiple VMs and deciding where to draw them. I suppose like boxes
does now.
Boxes, for the display part, is just plain gtk+, regular x11 cairo (the animation part is using clutter/gl, but we bypass that for display)

Although this is limited to local rendering, we could teach the spice-gtk API to provide a gl texture or RBO. The client interface and further integration for this could be prototyped using the spice 2d gl canvas today. For gtk/cairo, there is a cairo_gl_surface_create_for_texture(), but for some reason it is not advertised. I also wonder if somebody has tried to teach gtk+ to use a cairo gl surface (apparently not upstream). Without this, cairo will have to do ReadPixels... Or the gtk widget could embed its own GL window; that's probably the way to go. I wondered how webkit-gtk does webgl. It looks like they use an offscreen gl window. I will try to verify this.
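
Something like this, assuming cairo is built with its experimental gl/egl
backend (sketch only, untested; the texture is assumed to already contain
the guest display):

#include <EGL/egl.h>
#include <cairo.h>
#include <cairo-gl.h>

/* Wrap an existing GL texture as a cairo surface, so a gtk/cairo based
 * widget could draw it without a ReadPixels.  Requires cairo built with
 * the (experimental, off by default) gl/egl backend. */
static cairo_surface_t *surface_from_texture(EGLDisplay dpy, EGLContext ctx,
                                             unsigned int tex,
                                             int width, int height)
{
    cairo_device_t *dev = cairo_egl_device_create(dpy, ctx);

    return cairo_gl_surface_create_for_texture(dev, CAIRO_CONTENT_COLOR,
                                               tex, width, height);
}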
Post by Dave Airlie
Post by Gerd Hoffmann
Post by Hans de Goede
When displaying locally (so SDL-2 or gtk ui), we want to avoid the read by
passing a kernel dma_buf handle from the virtual card (in this case
virtio-vga with Virgil) to the ui (in this case SDL-2), so it can then
directly ask the GPU to blit from that dma_buf to its own visible surface.
Hmm. Both SDL and gtk ui have the problem that they effectively bind
your VM to the desktop session. Which is not what you want for anything
but short-running test VMs. It's also a PITA to maintain them with
libvirt.
Yeah I've hit that. So far virgil is only running via libvirt with a very hacked
insecure config to draw to the local X server of my user. Getting past that
is however going to involve a bit of lifting all over the place.
Post by Gerd Hoffmann
Any plans for a separate UI process? Something using a unix socket for
control commands and to hand over a dma-buf handle using fd descriptor
passing maybe?
That would be the local rendering solution I think we'd prefer,
qemu runs as qemu user, uses EGL to talk to the drm render-nodes,
has some sort of unix socket that the viewer connects to and can hand
fds across, then the client viewer uses EGL or GLX to render on-screen
and import the handles into EGL and displays the contents, there may
be a small bit of sync info to send across.
For remoting then we'd have an extra readback (slow) from the GPU and
then spice or vnc encoding stages.
That would also open up the possibility of running the remote server outside of qemu.
Steven Newbury
2013-10-09 05:36:45 UTC
Post by Dave Airlie
That would be the local rendering solution I think we'd prefer,
qemu runs as qemu user, uses EGL to talk to the drm render-nodes,
has some sort of unix socket that the viewer connects to and can hand
fds across, then the client viewer uses EGL or GLX to render on-screen
and import the handles into EGL and displays the contents, there may
be a small bit of sync info to send across.
For remoting then we'd have an extra readback (slow) from the GPU and
then spice or vnc encoding stages.
For the non-local case wouldn't it be possible to have the GPU render directly to a shared buffer in system RAM, rather than to GPU memory followed by a read-back?
Dave Airlie
2013-10-09 06:31:14 UTC
Post by Steven Newbury
Post by Dave Airlie
That would be the local rendering solution I think we'd prefer,
qemu runs as qemu user, uses EGL to talk to the drm render-nodes,
has some sort of unix socket that the viewer connects to and can hand
fds across, then the client viewer uses EGL or GLX to render on-screen
and import the handles into EGL and displays the contents, there may
be a small bit of sync info to send across.
For remoting then we'd have an extra readback (slow) from the GPU and
then spice or vnc encoding stages.
For the non-local case wouldn't it be possible to have the GPU render
directly to a shared buffer in system RAM rather than to the GPU memory and
reading back?
No, that generally is a really bad idea: things like blending involve the
GPU reading back from video ram, so rendering into system RAM would
generally end up being worse for framerates than reading back the final
result.

Dave.
Dave Airlie
2013-10-08 22:46:15 UTC
Post by Gerd Hoffmann
That leaves the question how to do single-card multihead. I think the
most sensible approach here is to go the spice route, i.e. have one big
framebuffer and define scanout rectangles for the virtual monitors.
This is how real hardware works, and it also provides a natural fallback
mode for UIs not supporting scanout rectangles: they show a single
window with the whole framebuffer, similar to old spice clients.
No, real hw doesn't work like that; that is just how we program real hw at
the moment, but for example wayland won't do this, and neither does Windows
mostly.

Real hw can have multiple separate framebuffers with separate strides, and
separate scanouts from them, and the kms device drivers fully support this
mode of operation; just the X server prevents it from being usable.
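
For reference, with libdrm that mode of operation is just two framebuffers,
each with its own stride, set on two crtcs (buffer allocation and mode
lookup omitted, the ids are placeholders):

#include <xf86drm.h>
#include <xf86drmMode.h>

/* Two independent framebuffers scanned out on two different CRTCs -- the
 * "separate framebuffer per head" model real hw and the kms drivers
 * support.  Buffer allocation, mode lookup and the id values are
 * omitted / placeholders. */
static void setup_two_heads(int fd,
                            uint32_t crtc0, uint32_t conn0,
                            drmModeModeInfo *mode0,
                            uint32_t handle0, uint32_t pitch0,
                            uint32_t crtc1, uint32_t conn1,
                            drmModeModeInfo *mode1,
                            uint32_t handle1, uint32_t pitch1)
{
    uint32_t fb0, fb1;

    drmModeAddFB(fd, mode0->hdisplay, mode0->vdisplay, 24, 32,
                 pitch0, handle0, &fb0);
    drmModeAddFB(fd, mode1->hdisplay, mode1->vdisplay, 24, 32,
                 pitch1, handle1, &fb1);

    /* Each CRTC gets its own framebuffer and its own connector. */
    drmModeSetCrtc(fd, crtc0, fb0, 0, 0, &conn0, 1, mode0);
    drmModeSetCrtc(fd, crtc1, fb1, 0, 0, &conn1, 1, mode1);
}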

I'd ideally want to have a window per gpu output, since the idea would be
to allow them to be placed on different monitors on the host side, and
doing it as a single app might limit the possibilities.

The other thing is that for virgil to work at all we need to render the
whole console using GL interfaces. At the moment I just use glDrawPixels
in the SDL ui update when in GL mode, so there is no direct access to the
framebuffer in this case anyway, it's all just GL rendered.
Post by Gerd Hoffmann
Post by Hans de Goede
2) The ability for a video-card generating output to pass a dma-buf
context to the display (ui in qemu terms) to get the contents from,
rather than requiring the contents to be rendered to some memory
buffer. This way we can save the quite expensive read-back from gpu
memory of the rendered result and then copying that back to the
framebuffer of the gpu for local displays (ie gtk, SDL),
Hmm? Not sure what you are asking for...
First, reading from gpu memory isn't expensive. It's all virtual, no
slow read cycles as with real hardware. There is almost no difference
between gpu memory and main memory for kvm guests. It's not clear to me
why you are copying stuff from/to gpu memory.
Second, you can have your scanout framebuffer in main memory. That
isn't a problem at all. It only needs to be contiguous in guest
physical memory, scatter-gather for the framebuffer isn't going to fly.
Scatter-gather for the framebuffer is fine as long as it's not a VESA LFB.
I already have the virtio-vga device allocating a non-contiguous
framebuffer and just using DMA operations to move data in/out.

Dave.
Gerd Hoffmann
2013-10-09 08:44:58 UTC
Post by Dave Airlie
Post by Gerd Hoffmann
That leaves the question how to do single-card multihead. I think the
most sensible approach here is to go the spice route, i.e. have one big
framebuffer and define scanout rectangles for the virtual monitors.
This is how real hardware works, and it also provides a natural fallback
mode for UIs not supporting scanout rectangles: they show a single
window with the whole framebuffer, similar to old spice clients.
No real hw doesn't work like that, that is how we program real hw at the moment,
but for example wayland won't do this, and neither does Windows mostly,
real hw can have multiple separate framebuffers with separate strides,
and separate
scanouts from them, and the kms device drivers fully support this mode
of operation,
just the X server prevents it from being useable.
Ok. So scratch that idea. It's probably better to have the gfx card
register multiple QemuConsoles then (one for each virtual connector),
with some infrastructure bits in the ui core + frontends to allow
enabling/disabling them.
Post by Dave Airlie
The other thing is for virgil to work at all we need to render the
whole console using
GL interfaces, at the moment I just use glDrawPixels in the SDL ui
update when in GL
mode, so there is no direct access to the framebuffer in this case
anyways, its all
just GL rendered.
When the guest's virtual gfx card doesn't let the gpu render into a
dma-buf we have to copy the bits anyway. Ideally just memcpy from the guest
framebuffer to a dma-buf (not sure drm allows that), so we can hand out
a dma-buf handle for rendering no matter whether the guest uses virgil
or cirrus.
Post by Dave Airlie
Post by Gerd Hoffmann
Second, you can have your scanout framebuffer in main memory. That
isn't a problem at all. It only needs to be contiguous in guest
physical memory, scatter-gather for the framebuffer isn't going to fly.
Scatter-gather for the framebuffer is fine as long as
its not VESA LFB. I already have virtio-vga device allocating a
non-contig framebuffer
and just using DMA operations to move data in/out.
Sure it's possible. All qemu UIs want a linear framebuffer they can
operate on though. So with scatter-gather you have to copy the data
into a linear buffer in qemu memory. Without scatter-gather you can
pass a reference to guest memory to the UIs and avoid the extra copy.

May not matter if you offload the work to the gpu anyway.

What is virtio-vga btw? The virgil virtual vga device or something
else?

cheers,
Gerd
Hans de Goede
2013-10-09 08:53:45 UTC
Hi,

On 10/09/2013 10:44 AM, Gerd Hoffmann wrote:

<snip>
Post by Gerd Hoffmann
What is virtio-vga btw? The virgil virtual vga device
Yes, see:

http://airlied.livejournal.com/78104.html

Regards,

Hans
Gerd Hoffmann
2013-10-09 10:22:44 UTC
Hi,
Post by Gerd Hoffmann
When the guests virtual gfx card doesn't let the gpu render into a
dma-buf we have to copy the bits anyway. Ideally just memcpy from guest
framebuffer to a dma-buf (not sure drm allows that), so we can hand out
a dma-buf handle for rendering no matter whether the guest uses virgil
or cirrus.
Oh I suppose we could do that, though it could be a bit messy as the dma-buf
stuff is pretty bound up in having a gpu device to attach through, which
you won't have when using virgil.
Did you mean "when not using virgil"?

For the to-be-written 'drm' frontend (the qemu side for the separate gl
viewer app discussed in the other subthread) this is the way to go IMO.
When using virgil the drm frontend just hands over the dma-buf to the
viewer. When not using virgil, allocate a dma-buf + copy the DisplaySurface
data to it. The viewer only has to deal with a dma-buf which it can
hand over to the wayland/x11 compositor for rendering, no matter what is
going on inside qemu.

Not sure how much sense it makes for the SDL frontend to do the same, I
don't know the SDL interfaces well enough. When SDL can deal with
dma-bufs directly this probably is the best way, otherwise maybe not.
Post by Gerd Hoffmann
Sure it's possible. All qemu UIs want a linear framebuffer they can
operate on though. So with scatter-gather you have to copy the data
into a linear buffer in qemu memory. Without scatter-gather you can
pass a reference to guest memory to the UIs and avoid the extra copy.
The thing is for GL rendered UI there is no requirement for linear framebuffer
and actually SDL has no real requirement either, SDL2.0 pretty much says
you need to upload things to the final buffer anyways.
DisplaySurfaces in qemu have to be linear, and I'm not sure this will
ever change. The linear buffer assumption is in way too many places. So
for anything you want to scan out (and thus stuff into a DisplaySurface) it
is very useful to be linear. You can use
qemu_create_displaysurface_from() then. Otherwise it is
qemu_create_displaysurface() + copy data.
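
Sketch of the two paths (prototypes from memory, so they may be slightly
off for your qemu version):

#include "ui/console.h"

/* Sketch only: the two ways a device can hand guest pixels to the
 * console layer. */
static void set_scanout(QemuConsole *con, int width, int height,
                        int stride, uint8_t *guest_fb /* NULL if s/g */)
{
    DisplaySurface *surface;

    if (guest_fb) {
        /* Linear guest framebuffer: wrap guest memory directly, no copy. */
        surface = qemu_create_displaysurface_from(width, height,
                                                  PIXMAN_x8r8g8b8,
                                                  stride, guest_fb);
    } else {
        /* Scatter-gather framebuffer: allocate a linear surface in qemu
         * memory; the device then has to copy/DMA the guest data into
         * surface_data(surface) on every update. */
        surface = qemu_create_displaysurface(width, height);
    }
    dpy_gfx_replace_surface(con, surface);
}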

In virgil mode it doesn't matter I think. The gpu will render stuff
into a dma-buf then, and if you mmap() the dma-buf I expect it will be
linear in qemu's user address space no matter what it looks like in
physical memory.
But I expect we can
make virtio-vga have a feature for a linear framebuffer addition in the mmio
space if we have pci or mmio support.
virtio has feature bits (with negotiation) in the spec, no need to
reserve an mmio register for that. I'd suggest using an lfb
unconditionally for non-3d though.
Its my attempt to write a clean device from scratch, the current virgil
codebase is a pile of hacks to qemu just so I could get to the 3D rendering
as soon as possible, so I've started a virtio-gpu device which does
proper virtio->pci->vga layering and I'm going to try and create a multi-head
gpu that has no acceleration then add the 3D accel as a feature with an
additional vq.
/me looks forward to seeing patches @ qemu-devel. It probably is useful to
start reviewing/merging virtio-vga before the 3d stuff is finished, to
sort multihead, to alert people something is coming, to get the ball
rolling for the other bits needed (such as vgabios).

cheers,
Gerd
David Airlie
2013-10-09 09:03:13 UTC
Post by Dave Airlie
Post by Gerd Hoffmann
That leaves the question how to do single-card multihead. I think the
most sensible approach here is to go the spice route, i.e. have one big
framebuffer and define scanout rectangles for the virtual monitors.
This is how real hardware works, and it also provides a natural fallback
mode for UIs not supporting scanout rectangles: they show a single
window with the whole framebuffer, similar to old spice clients.
No real hw doesn't work like that, that is how we program real hw at the moment,
but for example wayland won't do this, and neither does Windows mostly,
real hw can have multiple separate framebuffers with separate strides,
and separate
scanouts from them, and the kms device drivers fully support this mode
of operation,
just the X server prevents it from being useable.
Ok. So scratch that idea. It's probably better to have the gfx card
register multiple QemuConsoles then (one for each virtual connector),
with some infrastructure bits in the ui core + frontends to allow
enabling/disabling them.
Yes, that seems likely.
When the guests virtual gfx card doesn't let the gpu render into a
dma-buf we have to copy the bits anyway. Ideally just memcpy from guest
framebuffer to a dma-buf (not sure drm allows that), so we can hand out
a dma-buf handle for rendering no matter whether the guest uses virgil
or cirrus.
Oh I suppose we could do that, though it could be a bit messy as the dma-buf
stuff is pretty bound up in having a gpu device to attach through, which
you won't have when using virgil.
Post by Dave Airlie
Post by Gerd Hoffmann
Second, you can have your scanout framebuffer in main memory. That
isn't a problem at all. It only needs to be contiguous in guest
physical memory, scatter-gather for the framebuffer isn't going to fly.
Scatter-gather for the framebuffer is fine as long as
its not VESA LFB. I already have virtio-vga device allocating a
non-contig framebuffer
and just using DMA operations to move data in/out.
Sure it's possible. All qemu UIs want a linear framebuffer they can
operate on though. So with scatter-gather you have to copy the data
into a linear buffer in qemu memory. Without scatter-gather you can
pass a reference to guest memory to the UIs and avoid the extra copy.
The thing is, for a GL rendered UI there is no requirement for a linear
framebuffer, and actually SDL has no real requirement either; SDL 2.0
pretty much says you need to upload things to the final buffer anyway.
But I expect we can make virtio-vga have a feature for a linear
framebuffer addition in the mmio space if we have pci or mmio support.
What is virtio-vga btw? The virgil virtual vga device or something
else?
It's my attempt to write a clean device from scratch. The current virgil
codebase is a pile of hacks to qemu, just so I could get to the 3D rendering
as soon as possible, so I've started a virtio-gpu device which does
proper virtio->pci->vga layering. I'm going to try and create a multi-head
gpu that has no acceleration first, and then add the 3D accel as a feature
with an additional vq.

Though virtio-vga is a bit behind where I'd like it to be, I just managed
to fix SDL2.0 input today so I could at least type into my VMs again.

Dave.
Dave Airlie
2013-10-08 22:40:36 UTC
Post by Hans de Goede
I've already had a quick discussion about this with Dave Airlie, and
our ideas on this aligned perfectly.
The basic idea is to use qemu's console layer (include/ui/console.h)
as an abstraction between the new virtio-vga device Dave has in mind
(which will include optional 3D rendering capability through VIRGIL),
and various display options, ie SDL, vnc and Spice.
1) Multi head support, a question which comes up here, is do we only
add support for multiple heads on a single card, or do we also want
to support multiple cards each driving a head here ? I myself tend
to go with the KISS solution for now and only support a single
card with multiple heads.
I'm thinking it shouldn't be a major enhancement to go for
multiple cards with multiple heads. I'm not sure it's a worthy goal in the
real world though, but it might be nice for testing corner cases.
Post by Hans de Goede
2) The ability for a video-card generating output to pass a dma-buf
context to the display (ui in qemu terms) to get the contents from,
rather than requiring the contents to be rendered to some memory
buffer. This way we can save the quite expensive read-back from gpu
memory of the rendered result and then copying that back to the
framebuffer of the gpu for local displays (ie gtk, SDL), we would
of course still need the read back of the rendered output for
vnc / spice.
Well, at the moment I'm just using SDL/GLX inside the qemu process to talk
directly to the X server; this isn't suitable long term for VMs that aren't
running directly on the desktop.

So the longer term plan is to abstract the GLX bits away and, hopefully
with SDL2.0, use EGL to talk to the GPU device. It could still use GLX for
local testing VMs, but in the libvirt situation the qemu process running as
the qemu user would talk to the new drm rendernodes via EGL, and then use
an EGL extension to export the scanout buffer via dma-buf (hand wavy magic
notwithstanding). There are some EGL extensions in the works for this.
Then we'd just need to make the libvirt viewer use EGL/GLX so it can
actually render the scanout buffer to the screen.
Post by Hans de Goede
For proper multi-head support in the ui layer for local displays,
we will need to use SDL-2, either by porting the current SDL ui code
to SDL-2, or by introducing a new SDL-2 ui component.
I've done an initial SDL2 port already just using ifdef :)

http://cgit.freedesktop.org/~airlied/qemu/commit/?h=virtio-gpu&id=ee44399a3dbce8da810329230f0a439a3b88cd67

However, the input side of SDL changed quite a bit and it needs a bit more
work, though if people are inclined towards a separate sdl2.c I could do
that I suppose. The other reason I wanted SDL2.0 is that it supports argb
cursors.

Dave.
Hans de Goede
2013-10-10 10:04:31 UTC
Hi All,

So trying to summarize what has been discussed before:

The basic idea for virgil + spice integration is to use qemu's console
layer as an abstraction between the new virtio-vga device Dave has in
mind: http://airlied.livejournal.com/
and various display options, ie SDL, vnc and Spice.

The console layer would need some extensions for this:

1) Multi head support, this will be in the form of virtual gfx cards
registering 1 or more QemuConsoles (one for each virtual connector),
with some infrastructure bits in the ui core + frontends to allow
enabling/disabling them.

2) In order to support multi-head (and argb cursors) with SDL,
qemu will get an SDL-2 ui. This will be a new ui parallel to the
existing SDL ui; note that only one can be built at the same time.

3) Virgil will render using the host gpu, using EGL to talk to
a drm render node. For non-local displays the rendered contents
will be read back from the gpu and then passed as a pixmap to the
ui to transport over the network.

4) For local displays we want to avoid the (expensive) read-back
from gpu memory; this requires passing the rendering context to
the ui. There are 2 different cases here:

4a) A pure local ui running in the qemu context, ie sdl-2 and gtk

4b) The SDL and gtk uis are only useful for short lived vms. For
a longer running vm we want the vm to be able to run headless,
and allow the user to connect to it occasionally to view the vm's
"monitors".

5) Traditionally 4b is done using vnc / spice over loopback,
but for Virgil we will want to add some smarts to avoid the expensive
gpu mem read-back. libvirt already uses unix pipes rather than tcp
sockets when making a local connection. The plan is to use fd passing
over these pipes to give a spice-client viewing a local vm a handle
to the render context, which it can then use to directly display
the rendered frames. This will be implemented in spice-gtk, so that
all spice-clients using spice-gtk (virt-manager, virt-viewer, boxes)
automatically get support for this.
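
The client side of that fd passing is the standard SCM_RIGHTS receive,
roughly like this (sketch only; nothing here is existing spice-gtk API,
and socket setup / error handling is omitted):

#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Receive one file descriptor (the render context / dma-buf handle) over
 * the unix socket the client already uses for the local spice connection. */
static int recv_fd(int sock)
{
    char payload;
    struct iovec iov = { .iov_base = &payload, .iov_len = 1 };
    char cmsgbuf[CMSG_SPACE(sizeof(int))];
    struct msghdr msg = {
        .msg_iov = &iov, .msg_iovlen = 1,
        .msg_control = cmsgbuf, .msg_controllen = sizeof(cmsgbuf),
    };
    struct cmsghdr *c;
    int fd = -1;

    if (recvmsg(sock, &msg, 0) <= 0) {
        return -1;
    }
    for (c = CMSG_FIRSTHDR(&msg); c; c = CMSG_NXTHDR(&msg, c)) {
        if (c->cmsg_level == SOL_SOCKET && c->cmsg_type == SCM_RIGHTS) {
            memcpy(&fd, CMSG_DATA(c), sizeof(int));
        }
    }
    return fd;
}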


So, comments or corrections anyone? Note the intent of this summary is
to serve as a basis for my talk at kvm-forum.


Regards,

Hans
Gerd Hoffmann
2013-10-10 11:25:55 UTC
Hi,

Nice summary.
Post by Hans de Goede
3) Virgil will render using the host gpu, using EGL to talk to
a drm render node. For non local displays the rendered contents
will be read back from the gpu and then passed as a pixmap to the
ui to transport over the network
Interesting in this context: What is the status of 3d support for
qxl/spice? Is it possible to transform virgil 3d ops into spice 3d
ops, so you could offload the rendering to the spice-client? Does it make
sense to try? Or would the transfer of the data needed to render be more
expensive than transferring the rendered screen?

cheers,
Gerd
Hans de Goede
2013-10-10 11:31:40 UTC
Hi,
Post by Gerd Hoffmann
Hi,
Nice summary.
Post by Hans de Goede
3) Virgil will render using the host gpu, using EGL to talk to
a drm render node. For non local displays the rendered contents
will be read back from the gpu and then passed as a pixmap to the
ui to transport over the network
Interesting in this context: What is the status of 3d support for
qxl/spice?
Non-existent AFAIK.
Post by Gerd Hoffmann
Is it possible to transform virgil 3d ops into spice 3d
ops, so you could offload the rendering to the spice-client? Does it make
sense to try? Or would the transfer of the data needed to render be more
expensive than transferring the rendered screen?
AFAIK, people more knowledgeable than me on 3d (ie Keith Packard)
all seem to agree that transferring the commands to render would be
more expensive. IOW adding 3d support to Spice would not be really useful.

Regards,

Hans
Marc-André Lureau
2013-10-10 12:50:52 UTC
Post by Hans de Goede
Hi,
Post by Gerd Hoffmann
Hi,
Nice summary.
Post by Hans de Goede
3) Virgil will render using the host gpu, using EGL to talk to
a drm render node. For non local displays the rendered contents
will be read back from the gpu and then passed as a pixmap to the
ui to transport over the network
Interesting in this context: What is the status of 3d support for
qxl/spice?
Non existent AFAIK
Post by Gerd Hoffmann
Is it possible to transform virgil 3d ops into spice 3d
ops, so you could offload the rendering to the spice-client? Does it make
sense to try? Or would the transfer of the data needed to render be more
expensive than transferring the rendered screen?
AFAIK, people more knowledgeable than me on 3d (ie Keith Packard)
all seem to agree that transferring the commands to render would be
more expensive. IOW adding 3d support to Spice would not be really useful.
afaik, opengl was originally designed with remote rendering in mind.

I am no opengl expert, but it probably very much depends on the kind of application (Alon reported to us that Android apps remoting is fine). Wouldn't glxgears be fine too? ;) I think the upcost is pretty big in general, because of the upload of textures and data arrays, which are not very well compressed in the raw protocol. Probably a remote protocol, like spice, could help compress those (and cache on disk!). The result can then be read back in some applications, but that is not always the case (even rendering to texture and composition could be done remotely). Usage of readback is discouraged in general. Imho, it could be worth some experiments. But the current local-only approach is necessary anyway, and the server rendering approach could be complementary too.
Gerd Hoffmann
2013-10-10 13:10:32 UTC
Post by Marc-André Lureau
Post by Hans de Goede
AFAIK, people more knowledgeable than me on 3d (ie Keith Packard)
all seem to agree that transferring the commands to render would be
more expensive. IOW adding 3d support to Spice would not be really useful.
afaik, opengl has been designed originally with remote rendering in mind.
I am no opengl expert, but it probably very much depends on the kind of application (Alon reported us about Android apps remoting being fine). Wouldn't glx gears be fine too? ;) I think the upcost is pretty big in general, because of upload of textures and data arrays which are not very well compressed in raw protocol. Probably a remote protocol, like spice, could help compress those (and cache on disk!). Then result can be read back in some applications, but that is not always the case (even rendering to texture and composition could be done remotely). Usage of readback is discouraged in general. Imho, it could be worth some experiments. But current local only approach is necessary anyway, and the server rendering approach could be complementary too.
IIRC some high-end nvidia gfx cards (which can be partitioned for
virtual machines) can encode the guest's display as an H.264 stream in
hardware.

Given that there are use cases for hardware assisted video encoding in
the consumer space too (beam your android tablet display to the smart tv
over wifi) I wouldn't be surprised if video encoding support is
commonplace in gpus in the near future (maybe it even is there today).

That'll make sending an H.264 stream as the display channel an interesting
option. It should be a reasonably efficient protocol, with the ability to
offload a lot of the actual work to the gpu on both the server and client
side.

cheers,
Gerd
Dave Airlie
2013-10-10 21:15:54 UTC
Post by Gerd Hoffmann
IIRC some high-end nvidia gfx cards (which can be partitioned for
virtual machines) can encode the guests display as H.264 stream in
hardware.
Given that there are use cases for hardware assisted video encoding in
the consumer space too (beam your android tablet display to the smart tv
over wifi) I wouldn't be surprised if video encoding support is
commonplace in gpus in near future (maybe it even is there today).
That'll make sending a H.264 stream as display channel an interesting
option. Should be a reasonable efficient protocol, with the ability to
offload alot of the actual work to the gpu on both server and client
side.
I think nearly all GPUs, Intel ones included, can do on-board H264 encoding
now; the vaapi for Intel exports this ability. I'm not sure how to expose it
on non-Intel GPUs, or how they expose it under Windows etc.

The problem for us is the usual patent minefield around h264.

Dave.
Gerd Hoffmann
2013-10-11 06:47:51 UTC
Hi,
Post by Dave Airlie
I think nearly all GPUs, Intel ones included can do on-board H264 encoding now,
the vaapi for Intel exports this ability, not sure how to expose it on
non-intel GPUs,
or how they expose it under Windows etc.
The problem for us is the usual patent minefield around h264.
Yep. Offloading the work to the hardware may get around that. IANAL
though. But not having a software fallback for patent reasons isn't
very nice. Especially on the decoding end. Encoding without hardware
support probably isn't very useful anyway.

cheers,
Gerd
Fabio Fantoni
2013-10-11 08:38:42 UTC
Hi,
Post by Gerd Hoffmann
Post by Dave Airlie
I think nearly all GPUs, Intel ones included can do on-board H264 encoding now,
the vaapi for Intel exports this ability, not sure how to expose it on
non-intel GPUs, or how they expose it under Windows etc.
The problem for us is the usual patent minefield around h264.
Yep. Offloading the work to the hardware may get around that. IANAL
though. But not having a software fallback for patent reasons isn't
very nice. Especially on the decoding end. Encoding without hardware
support probably isn't very useful anyway.
cheers,
Gerd
What about hardware acceleration with the current codec (mjpeg), or the
vp8 or vp9 codecs?
Unfortunately, from what I have seen until now the cpu is overloaded
without dedicated video codec hardware acceleration, which makes spice
client performance very poor even on medium/high range thin clients, and
unusable on low end ones.

Thanks for any reply.
Dave Airlie
2013-10-10 21:14:12 UTC
Post by Marc-André Lureau
Post by Hans de Goede
Hi,
Post by Gerd Hoffmann
Hi,
Nice summary.
Post by Hans de Goede
3) Virgil will render using the host gpu, using EGL to talk to
a drm render node. For non local displays the rendered contents
will be read back from the gpu and then passed as a pixmap to the
ui to transport over the network
Interesting in this context: What is the status of 3d support for
qxl/spice?
Non existent AFAIK
Post by Gerd Hoffmann
Is it possible to transform virgil 3d ops into spice 3d
ops, so you could offload the rendering to the spice-client? Does it make
sense to try? Or would the transfer of the data needed to render be more
expensive than transferring the rendered screen?
AFAIK, people more knowledgeable than me on 3d (ie Keith Packard)
all seem to agree that transferring the commands to render would be
more expensive. IOW adding 3d support to Spice would not be really useful.
afaik, opengl has been designed originally with remote rendering in mind.
OpenGL 1.0 maybe; nobody has made any accommodation for remote rendering
in years, and they haven't defined GLX protocol for new extensions in
probably 8-10 years.

The thing is 3D rendering is high bandwidth for anything non-trivial,
the amount of data apps move to GPUs is huge for most things.
Post by Marc-André Lureau
I am no opengl expert, but it probably very much depends on the kind of application (Alon reported us about Android apps remoting being fine). Wouldn't glx gears be fine too? ;) I think the upcost is pretty big in general, because of upload of textures and data arrays which are not very well compressed in raw protocol. Probably a remote protocol, like spice, could help compress those (and cache on disk!). Then result can be read back in some applications, but that is not always the case (even rendering to texture and composition could be done remotely). Usage of readback is discouraged in general. Imho, it could be worth some experiments. But current local only approach is necessary anyway, and the server rendering approach could be complementary too.
There are numerous massive problems with remote rendering that the android
remoting guys don't solve, because they either don't need to or they
are too hard, but for a desktop you'd have to. I'll enumerate them for
posterity :-) Also your GL knowledge is a bit out of date :-P

1) readback - spice currently doesn't do remote readback, it always
does pixman rendering locally when the client reads something back.
Now GL isn't pixel perfect, and if we have different rendering hw or sw
rendering on the host and the client then we'd be giving back results
that weren't entirely accurate. Do we just read back from the
client then? That's probably going to suck. Things like gnome-shell use a
technique called picking on mouse movements and clicks; every mouse
movement can cause a readback so it knows what object is under the
mouse. Can you say latency enough?

2) disconnected operation - if we are remoting to a 3D client, what do
we do when it disconnects? Fall back to local sw rendering? Fall back to
local hw rendering? GL isn't pixel perfect, so the results may not be
the same as we got on the old remote, so should we be reading back
from the clients when they've rendered to have accurate results? GL
also has an insane number of versions and extensions; do we block apps
from using higher levels because we might get disconnected?

So it might be possible, if you had the same 3D hw in the server and guest,
to do something where you'd double render everything and keep some
sort of transaction log of what the client has finished with so the
server could throw it away, but it would be a lot of work; the android
guys afaik don't do readback and just kill the app on disconnect.

This is of course on top of the upcost you mention, which is vast for
most apps, not so bad for gnome-shell.

The on-gpu encoding seems to be what most of the solutions in the area
are doing (vmware, nvidia grid etc): the GPU streams the rendered
output through the h264 encoder so the cpu has a lot less to read back
and transfer over the network. It would be what we'd like to pursue,
but the usual patent issues around h264 mean we might never get
there. I think we are hoping that something like Daala can be used,
and it possibly has better support for straight edges and text. But
that is a lot more long term than getting virgil and possibly some
spice integration.

Dave.
Marc-André Lureau
2013-10-10 21:48:51 UTC
Post by Dave Airlie
Post by Marc-André Lureau
Post by Hans de Goede
Hi,
Post by Gerd Hoffmann
Hi,
Nice summary.
Post by Hans de Goede
3) Virgil will render using the host gpu, using EGL to talk to
a drm render node. For non local displays the rendered contents
will be read back from the gpu and then passed as a pixmap to the
ui to transport over the network
Interesting in this context: What is the status of 3d support for
qxl/spice?
Non existent AFAIK
Post by Gerd Hoffmann
Is it possible to transform virgil 3d ops into spice 3d
ops, so you could offload the rendering to the spice-client? Does it make
sense to try? Or would the transfer of the data needed to render be more
expensive than transferring the rendered screen?
AFAIK, people more knowledgeable than me on 3d (ie Keith Packard)
all seem to agree that transferring the commands to render would be
more expensive. IOW adding 3d support to Spice would not be really useful.
afaik, opengl has been designed originally with remote rendering in mind.
OpenGL 1.0 maybe nobody has made any accommodation to remote rendering
in years, they haven't defined GLX protocol for new extensions in
probably 8-10 years,
The thing is 3D rendering is high bandwidth for anything non-trivial,
the amount of data apps move to GPUs is huge for most things.
Most opengl applications, but perhaps that's not so true for desktop apps in general (including aero etc), as you noted for gnome-shell, which is more animated than most applications that just want simple smooth transitions. I am not looking at remote gaming or 3d benchmarks.

Even when a lot of data moves to the GPU, I wonder how much can actually be cached (ie how much is generated on the CPU).
Post by Dave Airlie
Post by Marc-André Lureau
I am no opengl expert, but it probably very much depends on the kind of
application (Alon reported us about Android apps remoting being fine).
Wouldn't glx gears be fine too? ;) I think the upcost is pretty big in
general, because of upload of textures and data arrays which are not very
well compressed in raw protocol. Probably a remote protocol, like spice,
could help compress those (and cache on disk!). Then result can be read
back in some applications, but that is not always the case (even rendering
to texture and composition could be done remotely). Usage of readback is
discouraged in general. Imho, it could be worth some experiments. But
current local only approach is necessary anyway, and the server rendering
approach could be complementary too.
There are numerous massive problems with remote rendering the android
remoting guys don't solve because they either don't need to or they
are too hard, but for a desktop you'd have to. I'll enumerate them for
posterity :-) Also your GL knowledge is a bit out of date :-P
1) readback - spice currently doesn't do remote readback it always
does pixman rendering locally when the client reads something back,
now GL isn't pixel perfect and if we have different rendering hw or sw
rendering on the host and the client then we'd be giving back results
that weren't entirely accurate. Now do we just readback from the
client then? probably going to suck. Now things like gnome-shell use a
technique called picking on mouse movement and clicks, every mouse
movement can cause a readback so it know what object is under the
mouse, can you say latency enough?
This is already in opengl 1.0 iirc (so probably with remoting in mind ;). I don't think latency for picking matters so much in a remote environment (even then, when there is a mouse click/event on the client side, we could perhaps already gather some picking information, to avoid a round trip). I was also imagining the double rendering that you mentioned.
Post by Dave Airlie
2) disconnected operation - if we are remoting to a 3D client, what do
we do when it disconnects? fallback to local sw rendering? fallback to
local hw rendering? GL isn't pixel perfect so the results may not be
the same as we get on the old remote, so should we be reading back
from the clients when they've rendered to have accurate results., GL
also has insane number of versions and extensions, do we block apps
from using higher levels because we might get disconnected?
So it might be possible if you had same 3D hw in the server and guest
to do something, where'd you double render everything, and keep some
sort of transaction log of what the client has finished with so the
server could throw it away, but it would be a lot of work, the android
guys afaik don't do readback and just kill the app on disconnect.
This is of course on top of the upcost you mention, which is vast for
most apps, not so bad for gnome-shell.
The on gpu encoding seems to be what most of the solutions in the area
are doing, vmware, nvidia grid etc, the GPU streams the rendered
output through the h264 encoder so the cpu has a lot less to readback
and transfer over the network, it would be what we'd like to pursue
but the usual patent issues around h264 means we might never get
there. I think we are hoping that something like Daala can be used,
and it possibly has better support for straight edges and text. But
that is a lot more long term than getting virgil and possible some
spice integration.
Dave.
Marc-André Lureau
2013-10-10 22:02:11 UTC
Post by Marc-André Lureau
Post by Dave Airlie
Post by Marc-André Lureau
Post by Hans de Goede
Hi,
Post by Gerd Hoffmann
Hi,
Nice summary.
Post by Hans de Goede
3) Virgil will render using the host gpu, using EGL to talk to
a drm render node. For non local displays the rendered contents
will be read back from the gpu and then passed as a pixmap to the
ui to transport over the network
Interesting in this context: What is the status of 3d support for
qxl/spice?
Non existent AFAIK
Post by Gerd Hoffmann
Is it possible to transform virgil 3d ops into spice 3d
ops, so you could offload the rendering to the spice-client? Does it make
sense to try? Or would the transfer of the data needed to render be more
expensive than transferring the rendered screen?
AFAIK, people more knowledgeable than me on 3d (ie Keith Packard)
all seem to agree that transferring the commands to render would be
more expensive. IOW adding 3d support to Spice would not be really useful.
afaik, opengl has been designed originally with remote rendering in mind.
OpenGL 1.0 maybe nobody has made any accommodation to remote rendering
in years, they haven't defined GLX protocol for new extensions in
probably 8-10 years,
The thing is 3D rendering is high bandwidth for anything non-trivial,
the amount of data apps move to GPUs is huge for most things.
Most opengl applications, but perhaps that's not so true for desktop apps in
general (including aero etc), as you noted for gnome-shell, which is more
animated than most applications that just want simple smooth transitions. I
am not looking at remote gaming or 3d benchmark.
Even when a lot of data moves to GPU, I wonder how much can actually be
cached (ie how much is generated on CPU).
Post by Dave Airlie
Post by Marc-André Lureau
I am no opengl expert, but it probably very much depends on the kind of
application (Alon reported us about Android apps remoting being fine).
Wouldn't glx gears be fine too? ;) I think the upcost is pretty big in
general, because of upload of textures and data arrays which are not very
well compressed in raw protocol. Probably a remote protocol, like spice,
could help compress those (and cache on disk!). Then result can be read
back in some applications, but that is not always the case (even rendering
to texture and composition could be done remotely). Usage of readback is
discouraged in general. Imho, it could be worth some experiments. But
current local only approach is necessary anyway, and the server rendering
approach could be complementary too.
There are numerous massive problems with remote rendering the android
remoting guys don't solve because they either don't need to or they
are too hard, but for a desktop you'd have to. I'll enumerate them for
posterity :-) Also your GL knowledge is a bit out of date :-P
1) readback - spice currently doesn't do remote readback it always
does pixman rendering locally when the client reads something back,
now GL isn't pixel perfect and if we have different rendering hw or sw
rendering on the host and the client then we'd be giving back results
that weren't entirely accurate. Now do we just readback from the
client then? probably going to suck. Now things like gnome-shell use a
technique called picking on mouse movement and clicks, every mouse
movement can cause a readback so it know what object is under the
mouse, can you say latency enough?
This is already in opengl 1.0 iirc (so probably with remoting in mind ;), and
I don't think latency for picking matters so much in remote environment
(even then, when there is a mouse click/event on client side, we could
already gather some picking information perhaps, to avoid round trip), I was
also imagining the double rendering that you mentionned.
If there is enough interest, we could measure this by running Win7 + some desktop applications with aero in VirtualBox, running apitrace under it (that should be possible), and gathering some statistics (with some compression tools etc). That could give a rough estimate of the bandwidth.
Dave Airlie
2013-10-10 23:31:24 UTC
Post by Marc-André Lureau
Post by Dave Airlie
OpenGL 1.0 maybe nobody has made any accommodation to remote rendering
in years, they haven't defined GLX protocol for new extensions in
probably 8-10 years,
The thing is 3D rendering is high bandwidth for anything non-trivial,
the amount of data apps move to GPUs is huge for most things.
Most opengl applications, but perhaps that's not so true for desktop apps in general (including aero etc), as you noted for gnome-shell, which is more animated than most applications that just want simple smooth transitions. I am not looking at remote gaming or 3d benchmark.
Even when a lot of data moves to GPU, I wonder how much can actually be cached (ie how much is generated on CPU).
A lot of our desktop apps, however, are going to GL, like libreoffice
and firefox, and I expect they'll be generating a fair amount of data;
not as much as games or benchmarks, but still enough.
Post by Marc-André Lureau
Post by Dave Airlie
There are numerous massive problems with remote rendering the android
remoting guys don't solve because they either don't need to or they
are too hard, but for a desktop you'd have to. I'll enumerate them for
posterity :-) Also your GL knowledge is a bit out of date :-P
1) readback - spice currently doesn't do remote readback it always
does pixman rendering locally when the client reads something back,
now GL isn't pixel perfect and if we have different rendering hw or sw
rendering on the host and the client then we'd be giving back results
that weren't entirely accurate. Now do we just readback from the
client then? probably going to suck. Now things like gnome-shell use a
technique called picking on mouse movement and clicks, every mouse
movement can cause a readback so it know what object is under the
mouse, can you say latency enough?
This is already in opengl 1.0 iirc (so probably with remoting in mind ;), and I don't think latency for picking matters so much in remote environment (even then, when there is a mouse click/event on client side, we could already gather some picking information perhaps, to avoid round trip), I was also imagining the double rendering that you mentionned.
No, you are thinking about GL 1.0 GL_SELECT; this isn't what is used
anymore. gnome-shell renders a backbuffer with colors and then does a
glReadPixels on it; there are optimisations to avoid this in some cases,
but in any complex scene it's probably the only useful way. The latency
issue is that every mouse movement is going to generate another round trip
for the readpixels of the place it lands, and that is going to be sluggish.
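
For posterity, the picking technique boils down to something like this
(generic GL sketch, not gnome-shell's actual code): every pickable object is
drawn into an off-screen buffer with a flat color encoding its id, then the
pixel under the cursor is read back.

#include <GL/gl.h>

/* Generic color-picking sketch.  Remotely, the glReadPixels below becomes
 * a full round trip per mouse movement, which is the latency problem
 * described above. */
static unsigned int pick_object_at(int x, int y, int fb_height)
{
    unsigned char rgba[4];

    /* ... render the scene into a backbuffer with per-object id colors ... */

    glReadPixels(x, fb_height - y - 1, 1, 1,
                 GL_RGBA, GL_UNSIGNED_BYTE, rgba);

    /* Decode the object id from the color. */
    return rgba[0] | (rgba[1] << 8) | (rgba[2] << 16);
}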

Dave.
Dave Airlie
2013-10-17 18:34:50 UTC
Post by Hans de Goede
Hi All,
The basic idea for virgil + spice integration is to use qemu's console
layer as an abstraction between the new virtio-vga device Dave has in
mind: http://airlied.livejournal.com/
and various display options, ie SDL, vnc and Spice.
1) Multi head support, this will be in the form of virtual gfx cards
registering 1 or more QemuConsoles (one for each virtual connector),
with some infrastructure bits in the ui core + frontends to allow
enabling/disabling them.
Okay I've done some of this work on ui in my virtio-gpu branch of my qemu repo

http://cgit.freedesktop.org/~airlied/qemu/log/?h=virtio-gpu

I've added multiple display surfaces to a QemuConsole and added an idx
parameter in a couple of places.

Then I've hooked it up to SDL2; this lets me, with my virtio-gpu, create a
screen spanning two windows.

Dave.
Gerd Hoffmann
2013-10-18 10:56:26 UTC
Post by Dave Airlie
Okay I've done some of this work on ui in my virtio-gpu branch of my qemu repo
http://cgit.freedesktop.org/~airlied/qemu/log/?h=virtio-gpu
I've added multiple display surfaces to a QemuConsole and add an idx
parameter in a couple of places,
/me looks at bf9a3b69c80a6fbd289b6340b8bdc9e994630bdc, console.c
changes.

This isn't what I meant, guess I wasn't verbose enough...

My idea is to simply call graphic_console_init() multiple times in
virtio-vga, so you get multiple QemuConsoles.
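
Roughly like this (sketch only; the callbacks and device bits are made up,
and the graphic_console_init() prototype follows the current console code,
which may still change):

#include "ui/console.h"

static void virtio_vga_gfx_update(void *opaque)
{
    /* refresh the surfaces of all registered heads */
}

static const GraphicHwOps virtio_vga_ops = {
    .gfx_update = virtio_vga_gfx_update,
    /* .invalidate, .text_update, ... as needed */
};

#define VIRTIO_VGA_MAX_HEADS 4

static QemuConsole *virtio_vga_con[VIRTIO_VGA_MAX_HEADS];

/* One graphic_console_init() call per virtual connector gives you one
 * QemuConsole per head. */
static void virtio_vga_init_consoles(DeviceState *dev, void *opaque,
                                     int num_heads)
{
    int i;

    for (i = 0; i < num_heads; i++) {
        virtio_vga_con[i] = graphic_console_init(dev, &virtio_vga_ops, opaque);
    }
}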

cheers,
Gerd
Dave Airlie
2013-10-20 12:32:53 UTC
Post by Gerd Hoffmann
/me looks at bf9a3b69c80a6fbd289b6340b8bdc9e994630bdc, console.c
changes.
This isn't what I've meant, guess I wasn't verbose enough ..
Yeah, I'm not sure I liked that idea, as you seem to treat a QemuConsole
as being a distinct console, whereas in this case the multiple heads
aren't distinct consoles. While it might work, I feel you'll then have
to add some extra things to make the frontends understand the difference
between multi-card and multi-head.

Though maybe that doesn't matter, I'll probably try it your way once
I've hacked it up a bit more.

Dave.
