DMA Descriptors Lists are reused with dma_descriptors_impl! #2993
Comments
I don't know if we should count this as a bug in the code, but maybe in the documentation. You are not really supposed to call that macro in a loop; it's a convenience solution for the less specialized use cases. You should instead create one big list of descriptors up front and manually split it into chunks of the size you need inside your loop, as sketched below.
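A minimal sketch of that pattern, assuming the `mk_static!` helper from the esp-hal examples (built on the `static_cell` crate); the counts are illustrative, not from esp-hal:

```rust
use esp_hal::dma::DmaDescriptor;

// The usual helper from the esp-hal examples: place a value in a static
// and hand back a &'static mut to it.
macro_rules! mk_static {
    ($t:ty, $val:expr) => {{
        static CELL: static_cell::StaticCell<$t> = static_cell::StaticCell::new();
        CELL.init($val)
    }};
}

const DESC_PER_TRANSFER: usize = 8; // hypothetical
const TRANSFERS: usize = 4; // hypothetical

// One big list, created once...
let all = mk_static!(
    [DmaDescriptor; TRANSFERS * DESC_PER_TRANSFER],
    [DmaDescriptor::EMPTY; TRANSFERS * DESC_PER_TRANSFER]
);
// ...then split into per-transfer chunks inside the loop.
for desc_chunk in all.chunks_mut(DESC_PER_TRANSFER) {
    // Build each DmaTxBuf/DmaRxBuf from its own `desc_chunk` here.
}
```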
Fair enough. I don't think there is any documentation or examples on using DMA buffers/descriptors without going through this macro, so calling it multiple times seemed like the natural approach. For now I'll try creating them manually to resolve my current roadblock.
You should be able to use the macro multiple times; each call site expands to its own storage:

```rust
let (rx_descriptors, tx_descriptors) = esp_hal::dma_descriptors!(RX_SIZE, TX_SIZE);
let (other_rx_desc, other_tx_desc) = esp_hal::dma_descriptors!(RX_SIZE, TX_SIZE);
// These are not the same set of descriptors
core::assert_ne!(other_rx_desc.as_ptr(), rx_descriptors.as_ptr());
core::assert_ne!(other_tx_desc.as_ptr(), tx_descriptors.as_ptr());
```
As the macro declares its backing `static` inside its own block, each expansion at a distinct call site gets its own storage, while a single call site inside a loop keeps reusing the same one.
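Roughly, each expansion has this shape (a paraphrase for illustration, not the exact esp-hal source; `RX_COUNT`/`TX_COUNT` stand in for the computed descriptor counts):

```rust
let (rx_descriptors, tx_descriptors) = {
    // Each textual occurrence of the macro declares its own statics, so two
    // call sites never alias; a loop re-entering one call site, however,
    // hands out the same storage every iteration.
    static mut RX_DESCRIPTORS: [DmaDescriptor; RX_COUNT] =
        [DmaDescriptor::EMPTY; RX_COUNT];
    static mut TX_DESCRIPTORS: [DmaDescriptor; TX_COUNT] =
        [DmaDescriptor::EMPTY; TX_COUNT];
    unsafe { (&mut RX_DESCRIPTORS, &mut TX_DESCRIPTORS) }
};
```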
Ah, I see. Yeah, I was confused about how the macros resolve; that does make sense then. It would be nice if it were more natural to use, though. Const generics plus `mk_static!` do seem like a potential solution; I haven't gotten to fixing my example yet, so maybe I'll write up a proposal to add to the examples.
A const-generic and macro-less solution would be blocked on rust-lang/rust#76560, as we need to do maths on the sizes. We might have to just use …
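For illustration, the kind of macro-less signature this would enable (hypothetical; it does not compile on stable because the array length is computed from a const parameter, which is exactly what `generic_const_exprs` / rust-lang/rust#76560 gates):

```rust
#![feature(generic_const_exprs)] // nightly-only

use esp_hal::dma::DmaDescriptor;

const CHUNK_SIZE: usize = 4092; // illustrative chunk size

// The descriptor count is maths on the SIZE parameter, hence the feature gate.
fn descriptors<const SIZE: usize>() -> [DmaDescriptor; SIZE.div_ceil(CHUNK_SIZE)]
where
    [(); SIZE.div_ceil(CHUNK_SIZE)]:,
{
    [DmaDescriptor::EMPTY; SIZE.div_ceil(CHUNK_SIZE)]
}
```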
I ended up with the following solution (warning: lengthy):

```rust
// Context assumed from the surrounding project (not shown here): `Rgb565`
// from embedded-graphics, `mk_static!` from the esp-hal examples, the
// display's WIDTH/HEIGHT, and an `alloc` global allocator.
use core::slice::ChunksMut;

use esp_hal::dma::{DmaDescriptor, DmaTxBuf, ExternalBurstConfig};

// Total framebuffer size in bytes.
const DMA_BUFFER_SIZE: usize = WIDTH * HEIGHT * core::mem::size_of::<Rgb565>();
const DMA_ALIGNMENT: ExternalBurstConfig = ExternalBurstConfig::Size64;
// Per-descriptor chunk size, shrunk so every chunk stays burst-aligned.
const DMA_CHUNK_SIZE: usize = 4096 - DMA_ALIGNMENT as usize;
// One "superchunk" = one DmaTxBuf worth of pixels (16 KiB here).
const DMA_SUPERCHUNK_SIZE: usize = 2 * 8192usize;
const DMA_SUPERCHUNK_COUNT: usize = DMA_BUFFER_SIZE.div_ceil(DMA_SUPERCHUNK_SIZE);
// Superchunk size measured in pixels rather than bytes.
const SUPERCHUNK_ALIGN: usize = DMA_SUPERCHUNK_SIZE / core::mem::size_of::<Rgb565>();
// Descriptors needed per superchunk, e.g. ceil(16384 / 4032) = 5.
const DESCRIPTOR_COUNT: usize =
    esp_hal::dma::descriptor_count(DMA_SUPERCHUNK_SIZE, DMA_CHUNK_SIZE, false);
const MULTI_DESC_COUNT: usize = DESCRIPTOR_COUNT * DMA_SUPERCHUNK_COUNT;
pub struct DmaDescriptors<const N: usize> {
descriptors: [DmaDescriptor; N],
}
impl<const N: usize> DmaDescriptors<N> {
pub fn new() -> Self {
Self {
descriptors: [DmaDescriptor::EMPTY; N],
}
}
pub fn chunk_mut(&mut self, chunk_size: usize) -> ChunksMut<'_, DmaDescriptor> {
self.descriptors.chunks_mut(chunk_size)
}
}
// Safety contract: the returned allocation is intentionally leaked and never
// freed, which is what makes handing out a &'static mut sound.
pub unsafe fn dma_alloc_buffer(size: usize, align: usize) -> &'static mut [u8] {
let layout = core::alloc::Layout::from_size_align(size, align).unwrap();
unsafe {
let ptr = alloc::alloc::alloc(layout);
if ptr.is_null() {
defmt::error!("dma_alloc_buffer: alloc failed");
alloc::alloc::handle_alloc_error(layout);
}
core::slice::from_raw_parts_mut(ptr, size)
}
}
struct DmaFrameBuffer {
buffers: [Option<DmaTxBuf>; DMA_SUPERCHUNK_COUNT],
}
impl DmaFrameBuffer {
fn new() -> Self {
let tx_desc = mk_static!(
DmaDescriptors<MULTI_DESC_COUNT>,
DmaDescriptors::<MULTI_DESC_COUNT>::new()
);
let mut buffers = {
            // Start with an array of empty slots.
let mut array: [Option<DmaTxBuf>; DMA_SUPERCHUNK_COUNT] =
[const { None }; DMA_SUPERCHUNK_COUNT];
for (element, tx_desc) in array.iter_mut().zip(tx_desc.chunk_mut(DESCRIPTOR_COUNT)) {
defmt::warn!("Descriptor head: {}", tx_desc.as_mut_ptr());
let tx_buffer = unsafe { dma_alloc_buffer(DMA_SUPERCHUNK_SIZE, DMA_ALIGNMENT as usize) };
let display_buf = DmaTxBuf::new_with_config(tx_desc, tx_buffer, DMA_ALIGNMENT).unwrap();
element.replace(display_buf);
}
array
};
defmt::info!("DmaFrameBuffer: chunk count: {}", buffers.len());
defmt::info!(
"Dma Buffer size: {}, expected to be higher than {}",
buffers.len() * DMA_SUPERCHUNK_SIZE,
DMA_BUFFER_SIZE
);
defmt::debug!("First dmabuffer: {:?}", buffers[0].as_mut());
defmt::debug!("End dmabuffer: {:?}", buffers[buffers.len() - 1].as_mut());
        // Truncate the last buffer so the total length is exactly
        // DMA_BUFFER_SIZE (assumes it is not an exact multiple of
        // DMA_SUPERCHUNK_SIZE, else this would set the length to 0).
        buffers[buffers.len() - 1]
            .as_mut()
            .unwrap()
            .set_length(DMA_BUFFER_SIZE % DMA_SUPERCHUNK_SIZE);
DmaFrameBuffer { buffers }
}
#[inline]
#[clippy::has_significant_drop]
fn view(&mut self) -> DmaFrameBufferView<'_> {
DmaFrameBufferView {
buffers: self
.buffers
.iter_mut()
.map(|buffer| {
let buffer = buffer.as_mut().unwrap();
let (head, buf, tail) = unsafe { buffer.as_mut_slice().align_to_mut::<Rgb565>() };
assert!(head.is_empty());
assert!(tail.is_empty());
buf
})
.collect(),
}
}
#[inline]
    fn get_mut_raw(&mut self, index: usize) -> &mut [u8] {
        // `index` is in pixels; convert to a byte offset within the superchunk.
        let byte_idx = (index % SUPERCHUNK_ALIGN) * core::mem::size_of::<Rgb565>();
        &mut self.buffers[index / SUPERCHUNK_ALIGN]
            .as_mut()
            .unwrap()
            .as_mut_slice()[byte_idx..byte_idx + core::mem::size_of::<Rgb565>()]
    }
#[inline]
fn get_mut(&mut self, index: usize) -> &mut Rgb565 {
let (head, buf, tail) = unsafe {
self.buffers[index / SUPERCHUNK_ALIGN]
.as_mut()
.unwrap()
.as_mut_slice()
.align_to_mut::<Rgb565>()
};
assert!(head.is_empty());
assert!(tail.is_empty());
unsafe { buf.get_unchecked_mut(index % SUPERCHUNK_ALIGN) }
}
fn chunk_iter(&mut self) -> DmaFrameBufferChunkIter<'_> {
DmaFrameBufferChunkIter {
buffers: self,
resolved: true,
index: 0,
}
}
}
struct DmaFrameBufferChunkIter<'a> {
buffers: &'a mut DmaFrameBuffer,
resolved: bool,
index: usize,
}
impl Drop for DmaFrameBufferChunkIter<'_> {
fn drop(&mut self) {
defmt::assert!(
self.resolved,
"Dropped DmaFrameBufferChunkIter, but last buffer wasn't resolved",
);
}
}
impl DmaFrameBufferChunkIter<'_> {
pub fn get_index(&mut self) -> usize {
self.index
}
pub fn next(&mut self) -> Option<DmaTxBuf> {
defmt::assert!(
self.resolved,
"Requested next buffer before resolving previous; DmaFrameBuffer poisoned"
);
let buf = self.buffers.buffers.get_mut(self.index)?.take();
self.resolved = false;
buf
}
pub fn put_back(&mut self, buf: DmaTxBuf) {
defmt::assert!(
!self.resolved,
"Tried to resolved dma buffer twice; DmaFrameBuffer poisoned"
);
self
.buffers
.buffers
.get_mut(self.index)
.unwrap()
.replace(buf);
self.index += 1;
self.resolved = true;
}
}
struct DmaFrameBufferView<'a> {
buffers: alloc::vec::Vec<&'a mut [Rgb565]>,
}
struct DmaFrameBufferViewIter<'a> {
view: &'a mut DmaFrameBufferView<'a>,
index: usize,
}
impl<'a> Iterator for DmaFrameBufferViewIter<'a> {
type Item = &'a mut Rgb565;
fn next(&mut self) -> Option<Self::Item> {
if self.index >= DMA_BUFFER_SIZE / core::mem::size_of::<Rgb565>() {
None
} else {
            // SAFETY: we hold a mutable borrow of the whole DmaFrameBufferView,
            // and each index is visited exactly once, so no two returned
            // references alias.
            // TODO(perf): could be faster holding raw pointers and incrementing.
let item = unsafe { self.view.get_mut_ptr(self.index).as_mut() };
self.index += 1;
item
}
}
}
impl<'a> DmaFrameBufferView<'a> {
#[inline]
pub fn get_mut(&mut self, index: usize) -> Option<&mut Rgb565> {
self
.buffers
.get_mut(index / SUPERCHUNK_ALIGN)?
.get_mut(index % SUPERCHUNK_ALIGN)
}
#[inline]
pub unsafe fn get_mut_unchecked(&mut self, index: usize) -> &mut Rgb565 {
self
.buffers
.get_unchecked_mut(index / SUPERCHUNK_ALIGN)
.get_unchecked_mut(index % SUPERCHUNK_ALIGN)
}
#[inline]
pub unsafe fn get_mut_ptr(&mut self, index: usize) -> *mut Rgb565 {
self
.buffers
.get_unchecked_mut(index / SUPERCHUNK_ALIGN)
.get_unchecked_mut(index % SUPERCHUNK_ALIGN)
}
pub fn iter_mut(&'a mut self) -> DmaFrameBufferViewIter<'a> {
DmaFrameBufferViewIter {
view: self,
index: 0,
}
}
}
```

Usage:

```rust
{
let mut iter = self.display_buf.chunk_iter();
let mut offset = 0u32;
loop {
        let Some(mut chunk) = iter.next() else { break };
let chunk_len = chunk.len();
cs.set_low();
delay.delay_micros(100);
        // Previously: spidma.write(chunk_len, chunk)
        let transfer = match spidma.half_duplex_write(
mode,
Command::None,
Address::_24Bit(addr_pixels + offset, DataMode::Single),
0u8,
chunk_len,
chunk,
)
{
Ok(v) => v,
Err((e, spi, buf)) => {
cs.set_high();
// Going back to regular buffered spi
let spibus = spi.with_buffers(rx_buf, tx_buf);
                // ...
iter.put_back(buf);
return Err(anyhow::anyhow!("Could not fill screen: {:?}", e));
}
};
(spidma, chunk) = transfer.wait();
offset += chunk_len as u32;
delay.delay_micros(100);
cs.set_high();
iter.put_back(chunk);
delay.delay_micros(100);
}
}
```

TL;DR:

```rust
struct DmaFrameBuffer {
buffers: [Option<DmaTxBuf>; DMA_SUPERCHUNK_COUNT],
}
impl DmaFrameBuffer {
fn new() -> Self {
let tx_desc = mk_static!(
DmaDescriptors<MULTI_DESC_COUNT>,
DmaDescriptors::<MULTI_DESC_COUNT>::new()
);
let mut buffers = {
            // Start with an array of empty slots.
let mut array: [Option<DmaTxBuf>; DMA_SUPERCHUNK_COUNT] =
[const { None }; DMA_SUPERCHUNK_COUNT];
for (element, tx_desc) in array.iter_mut().zip(tx_desc.chunk_mut(DESCRIPTOR_COUNT)) {
defmt::warn!("Descriptor head: {}", tx_desc.as_mut_ptr());
let tx_buffer = unsafe { dma_alloc_buffer(DMA_SUPERCHUNK_SIZE, DMA_ALIGNMENT as usize) };
let display_buf = DmaTxBuf::new_with_config(tx_desc, tx_buffer, DMA_ALIGNMENT).unwrap();
element.replace(display_buf);
}
array
};
        DmaFrameBuffer { buffers }
    }
}
```

Then I claim and place the buffers back as the transfer progresses. It's not very ergonomic, but it functions.
Bug description

It seems like `dma_descriptors_impl!` re-uses the same `DESCRIPTORS` reference (esp-hal/esp-hal/src/dma/mod.rs, line 703 at 2105ba0). It could be that this is intended, but I don't see an obvious way to get multiple descriptor lists.
To Reproduce

Initialize an array of descriptors in a loop, and observe that the log shows the same pointer on every iteration.
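A minimal sketch of that kind of loop (illustrative, not the original snippet; the sizes are hypothetical):

```rust
use esp_hal::dma_descriptors_chunk_size;

const SIZE: usize = 4096;
const CHUNK: usize = 1024;

for i in 0..4 {
    // A loop body is a single call site, so every iteration resolves to the
    // same underlying static descriptor storage.
    let (rx, tx) = dma_descriptors_chunk_size!(SIZE, SIZE, CHUNK);
    defmt::info!(
        "iter {}: rx = {:x}, tx = {:x}",
        i,
        rx.as_ptr() as usize,
        tx.as_ptr() as usize
    );
}
```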
Expected behavior

Each call to `dma_descriptors_chunk_size!` makes a new descriptor list.

Environment