
Conversation

@saeid-a
Contributor

@saeid-a saeid-a commented Jul 24, 2023

I have completed the initial implementation of the raft transport. Can you review it and let me know if there are any issues or if it looks good? Thank you.

@saeid-a saeid-a requested review from qishenonly and sjcsjc123 July 24, 2023 05:58
@saeid-a saeid-a force-pushed the feat/raft-transport-comp-#180 branch from b4f5d85 to aa3c3cf on July 24, 2023 06:18
Member

@sjcsjc123 sjcsjc123 left a comment


Thank you for your efficient work. I need to carefully analyze it!

```diff
 // and receive rpc from other raft nodes
-type transport struct {
+type Transport struct {
 	//implement me
```
Member


Perhaps this comment can be deleted.

@sjcsjc123
Member

Going forward, could you submit the code directly to a branch of the main project, ByteStorage/FlyDB, so that we can switch branches to view it? That would be more convenient!

@sjcsjc123
Member

Once all of the raft components are implemented, we can test whether they work as expected in this way (a rough sketch of such a test follows after the list):

  1. Start three store nodes over gRPC and create a new raft node on each

  2. Query the number of peers in the store cluster and the cluster leader, and check that both can be obtained successfully

  3. Shut down one of the store nodes

  4. Check again whether the number of peers in the store cluster and the cluster leader can still be obtained

  5. Restart the store node that was just shut down

  6. Verify that the number of peers in the store cluster has returned to normal

Can you share your opinion?
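Below is a minimal Go sketch of what that smoke test could look like. The `storeNode` interface, the `newStoreNode` hook, and the `Peers`/`Leader`/`Shutdown`/`Restart` method names are assumptions for illustration only; they would need to be wired to FlyDB's actual store and raft APIs.

```go
package raft_test

import (
	"testing"
	"time"
)

// storeNode abstracts the operations the test plan needs. The method names
// here are assumptions; FlyDB's real store API may differ.
type storeNode interface {
	Peers() ([]string, error) // peers currently known to the store cluster
	Leader() (string, error)  // current cluster leader
	Shutdown() error          // stop the node
	Restart() error           // bring the node back up
}

// newStoreNode is a placeholder hook; a real test would start a store node
// over gRPC on the given address and join it to the cluster.
var newStoreNode func(addr string, peers []string) (storeNode, error)

func TestStoreClusterMembership(t *testing.T) {
	if newStoreNode == nil {
		t.Skip("newStoreNode is not wired to a real store implementation")
	}

	addrs := []string{"127.0.0.1:8801", "127.0.0.1:8802", "127.0.0.1:8803"}

	// 1. Start three store nodes, each with its own raft node.
	nodes := make([]storeNode, 0, len(addrs))
	for _, addr := range addrs {
		n, err := newStoreNode(addr, addrs)
		if err != nil {
			t.Fatalf("start node %s: %v", addr, err)
		}
		nodes = append(nodes, n)
	}
	time.Sleep(3 * time.Second) // crude wait for leader election

	// 2. Peer count and leader should both be obtainable.
	peers, leader := clusterState(t, nodes[0])
	if len(peers) != 3 || leader == "" {
		t.Fatalf("expected 3 peers and a leader, got peers=%v leader=%q", peers, leader)
	}

	// 3 + 4. Shut down one node; peers and leader should still be obtainable.
	if err := nodes[2].Shutdown(); err != nil {
		t.Fatalf("shutdown: %v", err)
	}
	time.Sleep(3 * time.Second)
	peers, leader = clusterState(t, nodes[0])
	t.Logf("after shutdown: peers=%v leader=%q", peers, leader)

	// 5 + 6. Restart the node; the peer count should return to normal.
	if err := nodes[2].Restart(); err != nil {
		t.Fatalf("restart: %v", err)
	}
	time.Sleep(3 * time.Second)
	peers, _ = clusterState(t, nodes[0])
	if len(peers) != 3 {
		t.Fatalf("expected peer count to recover to 3, got %d", len(peers))
	}
}

// clusterState queries the peer list and leader through one surviving node.
func clusterState(t *testing.T, n storeNode) ([]string, string) {
	t.Helper()
	peers, err := n.Peers()
	if err != nil {
		t.Fatalf("get peers: %v", err)
	}
	leader, err := n.Leader()
	if err != nil {
		t.Fatalf("get leader: %v", err)
	}
	return peers, leader
}
```

The fixed sleeps only stand in for whatever readiness/election signal the store exposes; a real test would poll with a timeout instead.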

```diff
-func newTransport() raft.Transport {
-	return &transport{}
+func newTransport(conf config.Config, l net.Listener, do []grpc.DialOption) (*Transport, error) {
+	s := grpc.NewServer()
```
Member


Our current design puts multiple regions on a single store node, so the store node's gRPC port can be shared by its region list; creating a new gRPC server here may not be in line with that objective.

The ultimate goal is for the meta cluster to manage the scheduling and allocation of the store cluster and of the regions on each store. The meta cluster itself is a raft cluster, which ensures strong consistency. At the same time, a region on one store forms a raft cluster with its region shards on other stores, and since a store may contain multiple regions, there will be multiple raft nodes on one store at the same time. This is my design proposal; if you feel it is not suitable or have any questions, please share them!

As for this NewServer: we can create one gRPC server when starting the store and pass it as a parameter to the transport component. A single gRPC server can have multiple services registered on it.
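A minimal Go sketch of that wiring, as I understand the suggestion: the store creates one grpc.Server at startup, every region's transport registers its handlers on that shared server, and the server is started once. The Transport contents, the newTransport/startStore signatures, and the commented-out registration call are assumptions for illustration rather than FlyDB's actual code.

```go
package store

import (
	"net"

	"google.golang.org/grpc"
)

// Transport sends and receives raft RPCs for one region. The struct contents
// and the registration call below are assumptions for this sketch, not
// FlyDB's actual API.
type Transport struct {
	// stream/pipeline state for this region's raft node would live here
}

// newTransport attaches the transport's RPC handlers to a gRPC server owned
// by the store, instead of creating its own server.
func newTransport(s *grpc.Server) *Transport {
	t := &Transport{}
	// pb.RegisterRaftTransportServer(s, t) // placeholder for the generated
	// protobuf registration call; the service name here is hypothetical.
	return t
}

// startStore shows the intended wiring: one gRPC server per store, shared by
// every region's transport (and by any other services the store exposes).
func startStore(addr string, regionCount int) ([]*Transport, error) {
	lis, err := net.Listen("tcp", addr)
	if err != nil {
		return nil, err
	}

	s := grpc.NewServer() // created once, when the store starts

	transports := make([]*Transport, 0, regionCount)
	for i := 0; i < regionCount; i++ {
		// every region's transport registers on the same shared server
		transports = append(transports, newTransport(s))
	}

	// serve in the background: one listener, one port, many registered services
	go func() { _ = s.Serve(lis) }()

	return transports, nil
}
```

Compared with the current newTransport calling grpc.NewServer() itself, this keeps one listener and one port per store regardless of how many regions (and therefore raft nodes) the store hosts.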
