
Basic system Initialize: #1

Closed
5 tasks done
cameled opened this issue Oct 14, 2021 · 5 comments

@cameled
Collaborator

cameled commented Oct 14, 2021

  • zephyr modules add GD32F4xx_Firmware_Library
  • add gigadevice gd32f4xx soc
  • USART/UART
  • add gigadevice gd32f405x boards
  • gd32f405x support hello_world sample
@cameled cameled self-assigned this Oct 14, 2021
@cameled cameled added the RFC label Oct 14, 2021
@cameled cameled added this to the alpha milestone Oct 14, 2021
@cameled
Collaborator Author

cameled commented Oct 14, 2021

Merge @nandojve's work "ARM: Introduce gigadevice gd32f403" in here. Once his PR has been merged, we will need to update our GD32F4xx module again.
zephyr: 74df681
hal_gigadevice: BrainCoTech/hal_gigadevice@fca423b

@nandojve

Hello @cameled ,

Nice to see that your team is looking to use GD32 with Zephyr. We have been working to enable it and I hope we can help each other. The RFC can be read at zephyrproject-rtos#38657.

Until we get zephyrproject-rtos#38661 into the main tree, there could be some changes.

I would recommend you look at zephyrproject-rtos/hal_gigadevice#1 to understand the YAML files for your SoC version. It will be necessary to enable pinctrl on it.

I encourage your team to open PRs to upstream code.

@cameled
Collaborator Author

cameled commented Oct 19, 2021

Hi @nandojve

We plan to contribute GD32F405/407 and GD32F450 SoC support, as well as GD32F450Z-EVAL board support, to the Zephyr project.

@nandojve

Yes, but at first I may only have access to a GD32F450-EVAL. I know that more people are interested in the GD32F450, so there will be more people involved. I can help with GD32F405/407 reviews. Since GD32F405/407 and GD32F450 use the same firmware API, there is no issue with testing code only on the GD32F450 for upstream.

@cameled
Collaborator Author

cameled commented Oct 28, 2021

Done with branch gd32f405.

@cameled cameled closed this as completed Oct 28, 2021
cameled pushed a commit that referenced this issue Dec 13, 2022
This patch reworks how fragments are handled in the net_buf
infrastructure.

In particular, it removes the union around the node and frags members in
the main net_buf structure. This is done so that both can be used at the
same time, at a cost of 4 bytes per net_buf instance.
This implies that the layout of net_buf instances changes whenever they are
inserted into a queue (fifo or lifo) or a linked list (slist).

Until now, this is what happened when enqueueing a net_buf with frags in
a queue or linked list:

1.1 Before enqueueing:

 +--------+      +--------+      +--------+
 |#1  node|\     |#2  node|\     |#3  node|\
 |        | \    |        | \    |        | \
 | frags  |------| frags  |------| frags  |------NULL
 +--------+      +--------+      +--------+

net_buf #1 has 2 fragments, net_bufs #2 and #3. Both the node and frags
pointers (they are the same, since they are unioned) point to the next
fragment.

1.2 After enqueueing:

 +--------+      +--------+      +--------+      +--------+      +--------+
 |q/slist |------|#1  node|------|#2  node|------|#3  node|------|q/slist |
 |node    |      | *flag  | /    | *flag  | /    |        | /    |node    |
 |        |      | frags  |/     | frags  |/     | frags  |/     |        |
 +--------+      +--------+      +--------+      +--------+      +--------+

When enqueueing a net_buf (in this case #1) that contains fragments, the
current net_buf implementation actually enqueues all the fragments (in
this case #2 and #3) as individual queue/slist items, since node and frags
are one and the same in memory. This makes the enqueueing operation
expensive and makes it impossible to dequeue atomically. The `*flag`
notation here means that the `flags` member has been set to
`NET_BUF_FRAGS` in order to be able to reconstruct the frags pointers
when dequeuing.

After this patch, the layout changes considerably:

2.1 Before enqueueing:

 +--------+       +--------+       +--------+
 |#1  node|--NULL |#2  node|--NULL |#3  node|--NULL
 |        |       |        |       |        |
 | frags  |-------| frags  |-------| frags  |------NULL
 +--------+       +--------+       +--------+

This is very similar to 1.1, except that now node and frags are
different pointers, so node is just set to NULL.

2.2 After enqueueing:

 +--------+       +--------+       +--------+
 |q/slist |-------|#1  node|-------|q/slist |
 |node    |       |        |       |node    |
 |        |       | frags  |       |        |
 +--------+       +--------+       +--------+
                      |            +--------+       +--------+
                      |            |#2  node|--NULL |#3  node|--NULL
                      |            |        |       |        |
                      +------------| frags  |-------| frags  |------NULL
                                   +--------+       +--------+

When enqueueing net_buf #1, we now enqueue only that very item instead
of enqueueing the frags as well, since node and frags are now separate
pointers. This simplifies the operation and makes it atomic.

Resolves zephyrproject-rtos#52718.

Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>