binder: Add support for transferring file descriptors. #371

Merged (1 commit, Jun 11, 2021)
19 changes: 16 additions & 3 deletions drivers/android/allocation.rs
@@ -1,14 +1,17 @@
 // SPDX-License-Identifier: GPL-2.0
 
-use alloc::sync::Arc;
-use core::mem::{size_of, MaybeUninit};
-use kernel::{bindings, pages::Pages, prelude::*, user_ptr::UserSlicePtrReader, Error};
+use alloc::{boxed::Box, sync::Arc};
+use core::mem::{replace, size_of, MaybeUninit};
+use kernel::{
+    bindings, linked_list::List, pages::Pages, prelude::*, user_ptr::UserSlicePtrReader, Error,
+};
 
 use crate::{
     defs::*,
     node::NodeRef,
     process::{AllocationInfo, Process},
     thread::{BinderError, BinderResult},
+    transaction::FileInfo,
 };
 
 pub(crate) struct Allocation<'a> {
@@ -19,6 +22,7 @@ pub(crate) struct Allocation<'a> {
     pub(crate) process: &'a Process,
     allocation_info: Option<AllocationInfo>,
     free_on_drop: bool,
+    file_list: List<Box<FileInfo>>,
 }
 
 impl<'a> Allocation<'a> {
@@ -37,9 +41,18 @@ impl<'a> Allocation<'a> {
             pages,
             allocation_info: None,
             free_on_drop: true,
+            file_list: List::new(),
         }
     }
 
+    pub(crate) fn take_file_list(&mut self) -> List<Box<FileInfo>> {
+        replace(&mut self.file_list, List::new())
+    }
+
+    pub(crate) fn add_file_info(&mut self, file: Box<FileInfo>) {
+        self.file_list.push_back(file);
+    }
+
     fn iterate<T>(&self, mut offset: usize, mut size: usize, mut cb: T) -> Result
     where
         T: FnMut(&Pages<0>, usize, usize) -> Result,
30 changes: 20 additions & 10 deletions drivers/android/thread.rs
@@ -1,6 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0
 
-use alloc::sync::Arc;
+use alloc::{boxed::Box, sync::Arc};
 use core::{alloc::AllocError, mem::size_of, pin::Pin};
 use kernel::{
     bindings,
@@ -19,7 +19,7 @@ use crate::{
     defs::*,
     process::{AllocationInfo, Process},
     ptr_align,
-    transaction::Transaction,
+    transaction::{FileInfo, Transaction},
     DeliverCode, DeliverToRead, DeliverToReadListAdapter, Either,
 };
 
@@ -376,7 +376,7 @@ impl Thread {
     fn translate_object(
         &self,
         index_offset: usize,
-        view: &AllocationView,
+        view: &mut AllocationView,
         allow_fds: bool,
     ) -> BinderResult {
         let offset = view.alloc.read(index_offset)?;
@@ -386,7 +386,8 @@
             BINDER_TYPE_WEAK_BINDER | BINDER_TYPE_BINDER => {
                 let strong = header.type_ == BINDER_TYPE_BINDER;
                 view.transfer_binder_object(offset, strong, |obj| {
-                    // SAFETY: The type is `BINDER_TYPE_{WEAK_}BINDER`, so `binder` is populated.
+                    // SAFETY: `binder` is a `binder_uintptr_t`; any bit pattern is a valid
+                    // representation.
                     let ptr = unsafe { obj.__bindgen_anon_1.binder } as _;
                     let cookie = obj.cookie as _;
                     let flags = obj.flags as _;
@@ -398,7 +399,7 @@
             BINDER_TYPE_WEAK_HANDLE | BINDER_TYPE_HANDLE => {
                 let strong = header.type_ == BINDER_TYPE_HANDLE;
                 view.transfer_binder_object(offset, strong, |obj| {
-                    // SAFETY: The type is `BINDER_TYPE_{WEAK_}HANDLE`, so `handle` is populated.
+                    // SAFETY: `handle` is a `u32`; any bit pattern is a valid representation.
                     let handle = unsafe { obj.__bindgen_anon_1.handle } as _;
                     self.process.get_node_from_handle(handle, strong)
                 })?;
@@ -407,6 +408,15 @@
                 if !allow_fds {
                     return Err(BinderError::new_failed());
                 }
+
+                let obj = view.read::<bindings::binder_fd_object>(offset)?;
+                // SAFETY: `fd` is a `u32`; any bit pattern is a valid representation.
+                let fd = unsafe { obj.__bindgen_anon_1.fd };

Member:
To nitpick: While this is indeed safe, I don't think the reason is correct. The program using binder doesn't have to populate fd. It is not UB to read it, however, as from the perspective of the kernel all userspace data is defined.

Member (@ojeda, Jun 9, 2021):
Hmm... I am confused here.

From a quick look at the reference, it says Rust unions do not have an active member (unlike C++), i.e. bits are always re-interpreted. It also says reading uninitialized memory from unions is OK, so that is not a concern either. Thus, for u32 etc. it would seem it is always safe to read from it, since there are no invalid values possible.

So, two questions:

  1. Why does Rust not simply mark the operation as safe? Is it because they did not want to make the operation conditionally-safe depending on the type?
  2. If it is always safe, what do you mean when you say the reason is "as from the perspective of the kernel all userspace data is defined"?

Member:
> Why does Rust not simply mark the operation as safe?

Imagine the following union:

    union TransmuteToBox {
        num: usize,
        the_box: Box<u8>,
    }

If reading a union variant were safe, TransmuteToBox { num: 0 }.the_box would give a dangling box, which is UB.

What the reference is saying is that if the byte representation of the union is valid for a union variant, then reading it as that variant is safe. This is unlike C, where writing one variant and then reading another is UB even if both variants have the same set of allowed byte representations. For example, the following would be UB in C:

    union TransmuteU32ToFourU8 {
        int: u32,
        array: [u8; 4],
    }

    TransmuteU32ToFourU8 { int: 0 }.array

despite the fact that all 0s is a valid byte representation for [u8; 4].
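
To make that rule concrete, here is a minimal standalone sketch (plain userspace Rust, not part of the patch; the union and values are invented for illustration) of a read that is sound precisely because every bit pattern is a valid value of the field's type:

    union U32OrBytes {
        int: u32,
        array: [u8; 4],
    }

    fn main() {
        let u = U32OrBytes { int: 0x0102_0304 };
        // SAFETY: any bit pattern is a valid `[u8; 4]`, so reinterpreting the
        // bytes written through `int` is sound (unlike the C rule above).
        let bytes = unsafe { u.array };
        assert_eq!(u32::from_ne_bytes(bytes), 0x0102_0304);
    }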

Member:
> If it is always safe, what do you mean when you say the reason is "as from the perspective of the kernel all userspace data is defined"?

Even if the data is undefined inside the program that called the ioctl, the compiler of the kernel has no way to exploit this fact, as it doesn't know about it.

Author:
I was under the impression that Rust had an active member. But given that isn't the case, it seems like the justifications should just be about the representation of the field being used. From the reference:

> It is the programmer's responsibility to make sure that the data is valid at the field's type. Failing to do so results in undefined behavior. For example, reading the value 3 through a field of the boolean type is undefined behavior.

Would something like this be reasonable to you all?

    // SAFETY: `fd` is a `u32`; any bit pattern is a valid representation.

Member:
copy_from_userspace is written in assembly AFAICT, which is opaque, so a C or Rust compiler has to assume that it initializes the full copy target. If the compiler were able to see that userspace leaves it uninitialized, that would make it impossible to write a kernel that can't be made to invoke UB by a malicious program.

Member:
I think in this case it's shared memory rather than a copy from userspace, but there is still no way to carry the "uninitialized" property across a process or kernel/userspace boundary.

Member:
If this is shared writable memory, you will have to use atomics, as otherwise you could have a data race, which is UB. The compiler could reload the value and expect it to still be the same if atomics are not used.

Member:
view.read already copies it :) It's just that the copy happens directly in kernel space rather than from userspace.

Author:
The memory is indeed shared with userspace, but userspace can only read from it. It is populated by copying from another userspace process to the kernel mapping (which is writable); while it is being copied, the receiving end (the one that has it mapped read-only) treats it as unallocated memory, so it doesn't touch it.

+                let file = File::from_fd(fd)?;
+                let field_offset =
+                    kernel::offset_of!(bindings::binder_fd_object, __bindgen_anon_1.fd) as usize;
+                let file_info = Box::try_new(FileInfo::new(file, offset + field_offset))?;
+                view.alloc.add_file_info(file_info);
             }
             _ => pr_warn!("Unsupported binder object type: {:x}\n", header.type_),
         }
@@ -420,9 +430,9 @@
         end: usize,
         allow_fds: bool,
     ) -> BinderResult {
-        let view = AllocationView::new(alloc, start);
+        let mut view = AllocationView::new(alloc, start);
         for i in (start..end).step_by(size_of::<usize>()) {
-            if let Err(err) = self.translate_object(i, &view, allow_fds) {
+            if let Err(err) = self.translate_object(i, &mut view, allow_fds) {
                 alloc.set_info(AllocationInfo { offsets: start..i });
                 return Err(err);
             }
@@ -558,7 +568,7 @@ impl Thread {
         let completion = Arc::try_new(DeliverCode::new(BR_TRANSACTION_COMPLETE))?;
         let process = orig.from.process.clone();
         let allow_fds = orig.flags & TF_ACCEPT_FDS != 0;
-        let reply = Arc::try_new(Transaction::new_reply(self, process, tr, allow_fds)?)?;
+        let reply = Transaction::new_reply(self, process, tr, allow_fds)?;
         self.inner.lock().push_work(completion);
         orig.from.deliver_reply(Either::Left(reply), &orig);
         Ok(())
@@ -592,7 +602,7 @@ impl Thread {
         let handle = unsafe { tr.target.handle };
         let node_ref = self.process.get_transaction_node(handle)?;
         let completion = Arc::try_new(DeliverCode::new(BR_TRANSACTION_COMPLETE))?;
-        let transaction = Arc::try_new(Transaction::new(node_ref, None, self, tr)?)?;
+        let transaction = Transaction::new(node_ref, None, self, tr)?;
         self.inner.lock().push_work(completion);
         // TODO: Remove the completion on error?
         transaction.submit()?;
@@ -606,7 +616,7 @@
         // could this happen?
         let top = self.top_of_transaction_stack()?;
         let completion = Arc::try_new(DeliverCode::new(BR_TRANSACTION_COMPLETE))?;
-        let transaction = Arc::try_new(Transaction::new(node_ref, top, self, tr)?)?;
+        let transaction = Transaction::new(node_ref, top, self, tr)?;
 
         // Check that the transaction stack hasn't changed while the lock was released, then update
         // it with the new transaction.
155 changes: 137 additions & 18 deletions drivers/android/transaction.rs
@@ -1,10 +1,20 @@
 // SPDX-License-Identifier: GPL-2.0
 
-use alloc::sync::Arc;
-use core::sync::atomic::{AtomicBool, Ordering};
+use alloc::{boxed::Box, sync::Arc};
+use core::{
+    pin::Pin,
+    sync::atomic::{AtomicBool, Ordering},
+};
 use kernel::{
-    io_buffer::IoBufferWriter, linked_list::Links, prelude::*, sync::Ref,
-    user_ptr::UserSlicePtrWriter, ScopeGuard,
+    bindings,
+    file::{File, FileDescriptorReservation},
+    io_buffer::IoBufferWriter,
+    linked_list::List,
+    linked_list::{GetLinks, Links},
+    prelude::*,
+    sync::{Ref, SpinLock},
+    user_ptr::UserSlicePtrWriter,
+    Error, ScopeGuard,
 };
 
 use crate::{
@@ -16,7 +26,12 @@ use crate::{
     DeliverToRead, Either,
 };
 
+struct TransactionInner {
+    file_list: List<Box<FileInfo>>,
+}
+
 pub(crate) struct Transaction {
+    inner: SpinLock<TransactionInner>,

Collaborator:
Do you have a real need for a SpinLock here, as opposed to a Mutex? Are you locking this from a non-sleep context? (Not suggesting it should be a Mutex - I just genuinely don't know.)

Author:
I need some synchronisation. Given that the hold times are small (just to swap a pointer), I chose a spinlock because it will just spin on contention, as opposed to sleeping. This is preferable because schedule() itself would take longer than the hold time of the lock; that is, sleeping is counterproductive on contention.

Member:
> just to swap a pointer

Would AtomicPtr work?

Author:
> Would AtomicPtr work?

I considered it, but decided against it for two reasons: it doesn't work for pointers to ?Sized types, and I may want to add more fields to inner eventually.
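
For reference, a minimal userspace sketch (illustrative only, not proposed driver code) of the AtomicPtr alternative for a Sized pointee:

    use std::sync::atomic::{AtomicPtr, Ordering};

    // Swap a heap allocation out of a shared slot without a lock. This works
    // because `u32` is Sized; fat pointers to ?Sized types do not fit in an
    // AtomicPtr, which is one of the two objections above.
    fn take(slot: &AtomicPtr<u32>) -> Option<Box<u32>> {
        let raw = slot.swap(std::ptr::null_mut(), Ordering::AcqRel);
        if raw.is_null() {
            None
        } else {
            // SAFETY: the pointer came from `Box::into_raw` and the swap
            // transfers ownership exactly once.
            Some(unsafe { Box::from_raw(raw) })
        }
    }

    fn main() {
        let slot = AtomicPtr::new(Box::into_raw(Box::new(7u32)));
        assert_eq!(take(&slot).as_deref(), Some(&7));
        assert!(take(&slot).is_none());
    }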

Collaborator:
Makes sense. Could be one of the parameters to be verified using the ping benchmark somewhere in the future?

Author:
Yes, absolutely.

One "advantage" we have with binder is that we have the C implementation to benchmark against. As long as we're not too far off, we're good.

     // TODO: Node should be released when the buffer is released.
     node_ref: Option<NodeRef>,
     stack_next: Option<Arc<Transaction>>,
@@ -37,13 +52,16 @@
         stack_next: Option<Arc<Transaction>>,
         from: &Arc<Thread>,
         tr: &BinderTransactionData,
-    ) -> BinderResult<Self> {
+    ) -> BinderResult<Arc<Self>> {
         let allow_fds = node_ref.node.flags & FLAT_BINDER_FLAG_ACCEPTS_FDS != 0;
         let to = node_ref.node.owner.clone();
-        let alloc = from.copy_transaction_data(&to, tr, allow_fds)?;
+        let mut alloc = from.copy_transaction_data(&to, tr, allow_fds)?;
         let data_address = alloc.ptr;
+        let file_list = alloc.take_file_list();
         alloc.keep_alive();
-        Ok(Self {
+        let mut tr = Arc::try_new(Self {
+            // SAFETY: `spinlock_init` is called below.
+            inner: unsafe { SpinLock::new(TransactionInner { file_list }) },
             node_ref: Some(node_ref),
             stack_next,
             from: from.clone(),
@@ -55,19 +73,28 @@
             offsets_size: tr.offsets_size as _,
             links: Links::new(),
             free_allocation: AtomicBool::new(true),
-        })
+        })?;
+
+        let mut_tr = Arc::get_mut(&mut tr).ok_or(Error::EINVAL)?;
+
+        // SAFETY: `inner` is pinned behind `Arc`.
+        kernel::spinlock_init!(Pin::new_unchecked(&mut_tr.inner), "Transaction::inner");
+        Ok(tr)
     }
 
     pub(crate) fn new_reply(
         from: &Arc<Thread>,
         to: Ref<Process>,
         tr: &BinderTransactionData,
         allow_fds: bool,
-    ) -> BinderResult<Self> {
-        let alloc = from.copy_transaction_data(&to, tr, allow_fds)?;
+    ) -> BinderResult<Arc<Self>> {
+        let mut alloc = from.copy_transaction_data(&to, tr, allow_fds)?;
         let data_address = alloc.ptr;
+        let file_list = alloc.take_file_list();
         alloc.keep_alive();
-        Ok(Self {
+        let mut tr = Arc::try_new(Self {
+            // SAFETY: `spinlock_init` is called below.
+            inner: unsafe { SpinLock::new(TransactionInner { file_list }) },
             node_ref: None,
             stack_next: None,
             from: from.clone(),
@@ -79,7 +106,13 @@
             offsets_size: tr.offsets_size as _,
             links: Links::new(),
             free_allocation: AtomicBool::new(true),
-        })
+        })?;
+
+        let mut_tr = Arc::get_mut(&mut tr).ok_or(Error::EINVAL)?;
+
+        // SAFETY: `inner` is pinned behind `Arc`.
+        kernel::spinlock_init!(Pin::new_unchecked(&mut_tr.inner), "Transaction::inner");

Collaborator:
I'm still trying to get my head around pinning. Is there a guarantee that Self will never move out of the Arc? I'm guessing you either need an explicit guarantee (in the form of an # Invariant, which might be hard to mentally enforce here) or to return Pin<Arc<Self>>?

Author:
It's not guaranteed. At the moment it's an invariant of the project: I don't move objects behind Arcs.

Some background: in the beginning I had a bunch of "TODO: This needs to be pinned" notes in a lot of places, and a bunch of Pins that resulted in a bunch of unsafe blocks (it's always unsafe to get a mutable reference to a pinned type because the compiler needs us to manually ensure that we don't move anything). I removed them until we get a better handle on how this will look.
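
A minimal standalone sketch of that contract (plain userspace Rust, invented for illustration): mutable access to pinned data is unsafe because the caller must promise not to move the value.

    use std::pin::Pin;

    fn poke(mut s: Pin<&mut String>) {
        // SAFETY: the `String` is only mutated in place and never moved out,
        // which is exactly the promise `get_unchecked_mut` asks us to keep.
        let inner: &mut String = unsafe { s.as_mut().get_unchecked_mut() };
        inner.push('!');
    }

    fn main() {
        let mut v = String::from("pinned");
        // `String` is `Unpin`, so constructing the `Pin` itself is safe here.
        poke(Pin::new(&mut v));
        assert_eq!(v, "pinned!");
    }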

+        Ok(tr)
     }
 
     /// Determines if the transaction is stacked on top of the given transaction.
@@ -136,6 +169,33 @@ impl Transaction {
             process.push_work(self)
         }
     }
+
+    /// Prepares the file list for delivery to the caller.
+    fn prepare_file_list(&self) -> Result<List<Box<FileInfo>>> {
+        // Get list of files that are being transferred as part of the transaction.
+        let mut file_list = core::mem::replace(&mut self.inner.lock().file_list, List::new());
+
+        // If the list is non-empty, prepare the buffer.
+        if !file_list.is_empty() {
+            let alloc = self.to.buffer_get(self.data_address).ok_or(Error::ESRCH)?;
+            let cleanup = ScopeGuard::new(|| {
+                self.free_allocation.store(false, Ordering::Relaxed);
+            });
+
+            let mut it = file_list.cursor_front_mut();
+            while let Some(file_info) = it.current() {
+                let reservation = FileDescriptorReservation::new(bindings::O_CLOEXEC)?;
+                alloc.write(file_info.buffer_offset, &reservation.reserved_fd())?;
+                file_info.reservation = Some(reservation);
+                it.move_next();
+            }
+
+            alloc.keep_alive();
+            cleanup.dismiss();
+        }
+
+        Ok(file_list)
+    }
 }
 
 impl DeliverToRead for Transaction {
@@ -145,9 +205,19 @@
     pub sender_euid: uid_t,
     */
         let send_failed_reply = ScopeGuard::new(|| {
-            let reply = Either::Right(BR_FAILED_REPLY);
-            self.from.deliver_reply(reply, &self);
+            if self.node_ref.is_some() && self.flags & TF_ONE_WAY == 0 {
+                let reply = Either::Right(BR_FAILED_REPLY);
+                self.from.deliver_reply(reply, &self);
+            }
         });
+        let mut file_list = if let Ok(list) = self.prepare_file_list() {
+            list
+        } else {
+            // On failure to process the list, we send a reply back to the sender and ignore the
+            // transaction on the recipient.
+            return Ok(true);
+        };
+
         let mut tr = BinderTransactionData::default();
 
         if let Some(nref) = &self.node_ref {
@@ -165,10 +235,6 @@
             tr.data.ptr.offsets = (self.data_address + ptr_align(self.data_size)) as _;
         }
 
-        // When `drop` is called, we don't want the allocation to be freed because it is now the
-        // user's reponsibility to free it.
-        self.free_allocation.store(false, Ordering::Relaxed);
-
         let code = if self.node_ref.is_none() {
             BR_REPLY
         } else {
@@ -183,6 +249,27 @@
         // here on out.
         send_failed_reply.dismiss();
 
+        // Commit all files.
+        {
+            let mut it = file_list.cursor_front_mut();
+            while let Some(file_info) = it.current() {
+                if let Some(reservation) = file_info.reservation.take() {
+                    if let Some(file) = file_info.file.take() {
+                        reservation.commit(file);
+                    }
+                }
+
+                it.move_next();
+            }
+        }
+
+        // When `drop` is called, we don't want the allocation to be freed because it is now the
+        // user's responsibility to free it.
+        //
+        // `drop` is guaranteed to see this relaxed store because `Arc` guarantees that everything
+        // that happens while an object is referenced happens-before the eventual `drop`.
+        self.free_allocation.store(false, Ordering::Relaxed);

Collaborator:
If using Ordering::Relaxed, is the following scenario possible?

  1. a thread calls do_work() and free_allocation is set to false
  2. another thread calls Drop, doesn't "see" that free_allocation is false yet, and frees the allocation
  3. the user frees the allocation again?

Author:
It's not possible because Arc prevents that. In particular, all ref count decrements have release semantics, which are paired with an acquire right before drop. See https://doc.rust-lang.org/src/alloc/sync.rs.html#1540.
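
A compressed sketch of that pattern (modelled on the shape of the standard library code linked above; simplified, not the actual implementation):

    use std::sync::atomic::{fence, AtomicUsize, Ordering};

    // Returns true when the caller holds the last reference and may drop the
    // shared contents.
    fn release(refcount: &AtomicUsize) -> bool {
        // Every decrement is a release: it publishes all writes made while
        // this reference was alive, including relaxed stores such as
        // `free_allocation`.
        if refcount.fetch_sub(1, Ordering::Release) != 1 {
            return false;
        }
        // Pairs with the release decrements above, so the dropping thread
        // observes those writes before touching the data.
        fence(Ordering::Acquire);
        true
    }

    fn main() {
        let rc = AtomicUsize::new(2);
        assert!(!release(&rc)); // another clone is still alive
        assert!(release(&rc)); // last reference: safe to drop the contents
    }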

Collaborator:
That definitely sounds legit :) It may be hard to know for most people looking at the code, though. Maybe document this in the code at some point in the future? Idk, I'm sure you've got more important things on your plate in Binder.

Author:
Fair point. I'll add a comment that describes this.

Collaborator:
This got me thinking. Currently in kernel C, there are lots of variables that convey information from one thread to another, like free_allocation here. But many of them don't need to be protected by a synchronization primitive, because many kernel calls issue explicit fences. E.g. when you're waiting on a flag with wait_event_interruptible(), that function is guaranteed to have issued a fence by the time it returns, so the int or bool flag you're waiting for can never be out of date.

But in Rust, we can't make that optimization? We always need the non-zero-cost synchronization primitive, even if we can logically deduce things will work across threads?

Anyways, this is too optimize-ey to worry about at this point IMHO.

Member:
load/store with Ordering::Relaxed doesn't have overhead, though. There is a nice article about what's actually generated: https://www.cl.cam.ac.uk/~pes20/cpp/cpp0xmappings.html. Relaxed loads and stores are basically just normal loads and stores with an additional tear-free guarantee (even though normal loads and stores are de facto tear-free).

Collaborator:
Interesting!! Is Relaxed equivalent to a normal access in a structure?

Member:
> Interesting!! Is Relaxed equivalent to a normal access in a structure?

If you only access it once :)

If you access an atomic integer multiple times with Relaxed ordering, the compiler needs to generate multiple load instructions (since its value may change), while for a non-atomic integer the Rust compiler is free to assume that the value doesn't change (even if there is a function call in between two accesses).
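
A small illustration of that point (userspace sketch; a real hand-off of data would also need Acquire/Release ordering for the data itself):

    use std::sync::atomic::{AtomicU32, Ordering};

    // The relaxed load cannot be hoisted out of the loop: the compiler must
    // re-read `flag` on every iteration because another thread may store to
    // it. With a plain `&u32` it could cache the first read and spin forever.
    fn spin_until_nonzero(flag: &AtomicU32) -> u32 {
        loop {
            let v = flag.load(Ordering::Relaxed);
            if v != 0 {
                return v;
            }
            std::hint::spin_loop();
        }
    }

    fn main() {
        static FLAG: AtomicU32 = AtomicU32::new(0);
        std::thread::spawn(|| FLAG.store(42, Ordering::Relaxed));
        assert_eq!(spin_until_nonzero(&FLAG), 42);
    }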

Collaborator:
That's extremely interesting, thank you Gary!

Author:
> load/store with Ordering::Relaxed doesn't have overhead, though.

While this is generally true, it isn't always true. Even on architectures that offer non-tearing word-sized ops, if the type is bigger than the word size, we may need more expensive instructions. For example, on 32-bit x86, if you try to atomically read/write a 64-bit integer, it won't be a regular load/store.

But to Sven's original point, I tend to agree that Rust imposes restrictions that may make the code less efficient than C. We need to look into those carefully, as I think a lot of them can be exposed safely in Rust. Some of them, however, are inherently unsafe (i.e., they rely on the developer following rules that cannot be enforced by the compiler); for these I think we'll have to do as discussed in another bug/issue and provide some combination of a safe alternative, with the extra cost as the price for safety, and/or an unsafe optimal alternative. If we offer both, we let the developer choose according to their needs.


         // When this is not a reply and not an async transaction, update `current_transaction`. If
         // it's a reply, `current_transaction` has already been updated appropriately.
         if self.node_ref.is_some() && tr.flags & TF_ONE_WAY == 0 {
@@ -209,3 +296,35 @@ impl Drop for Transaction {
         }
     }
 }

+pub(crate) struct FileInfo {
+    links: Links<FileInfo>,
+
+    /// The file for which a descriptor will be created in the recipient process.
+    file: Option<File>,
+
+    /// The file descriptor reservation on the recipient process.
+    reservation: Option<FileDescriptorReservation>,
+
+    /// The offset in the buffer where the file descriptor is stored.
+    buffer_offset: usize,
+}
+
+impl FileInfo {
+    pub(crate) fn new(file: File, buffer_offset: usize) -> Self {
+        Self {
+            file: Some(file),
+            reservation: None,
+            buffer_offset,
+            links: Links::new(),
+        }
+    }
+}
+
+impl GetLinks for FileInfo {
+    type EntryType = Self;
+
+    fn get_links(data: &Self::EntryType) -> &Links<Self::EntryType> {
+        &data.links
+    }
+}