G. Casadesus Vila, J.A. Ruiz-de-Azua, E. Alarcon

Abstract

The upcoming landscape of Earth Observation missions will be defined by networked heterogeneous nanosatellite constellations required to meet strict mission requirements, such as revisit times and spatial resolution. However, scheduling satellite communications in these networks by efficiently creating a global satellite Contact Plan (CP) is a complex task: current solutions either require ground-based coordination or are limited by onboard computational resources. This paper proposes a novel approach to overcome these challenges by modeling the constellations and the CP as dynamic networks and employing graph-based techniques. The proposed method uses a state-of-the-art dynamic graph neural network to evaluate the performance of a given CP and updates the CP with a heuristic algorithm based on simulated annealing. The trained neural network predicts the network delay with a mean absolute error of 3.6 minutes. Simulation results show that the proposed method can successfully design a contact plan for large satellite networks, improving the delay by 29.1%, similar to a traditional approach, while performing the objective evaluations 20x faster.
The DGNN model selected to learn the objective function $F$ is EvolveGCN. One of its main features is handling the addition and removal of nodes after training, overcoming the limitation of other methods in learning these irregular behaviors. The authors propose using a Recurrent Neural Network (RNN) to regulate a Graph Convolutional Network (GCN) model (i.e., its network parameters) at different time steps. This approach effectively performs model adaptation by focusing on the model itself rather than on the node embeddings. Therefore, a change of nodes poses no restriction, and the model remains sensitive to new nodes without historical information.

As in the original paper's nomenclature, we use subscript $t$ to denote the time index, superscript $(l)$ to denote the GCN layer index, and $n$ for the number of nodes; in our case, $n = N_s + N_g$. At a time step $t$, the input data to the model consists of the pair $(A_t \in \mathbb{R}^{n \times n}, X_t \in \mathbb{R}^{n \times d})$, where the first element is the graph adjacency matrix (in our case, obtained from the contact plan $U_t$) and the second is the matrix of input node features; each row of $X_t$ is a vector of $d$ node features.

The central component of the model is the update of the weight matrix $W_t^{(l)}$ at each time step. The weights are updated by an RNN that takes as input the node embedding matrix $H_t^{(l)}$ and the previous weight matrix $W_{t-1}^{(l)}$, and outputs the updated weight matrix:

$$W_t^{(l)} = \mathrm{RNN}\left(H_t^{(l)},\, W_{t-1}^{(l)}\right) \qquad (8)$$

Combining the graph convolution with the recurrent architecture, the authors define the evolving graph convolution unit, which is the basic building block of the EvolveGCN model.
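As a concrete illustration, the weight-evolution step of Eq. (8) can be sketched in plain NumPy, with the RNN instantiated as a GRU cell whose hidden state is the GCN weight matrix. All dimensions, the norm-based summarization, and the parameter names here are illustrative simplifications, not the authors' implementation (EvolveGCN uses a learned summarization):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, p):
    """One GRU step: input x and hidden state h are (d, d) matrices,
    so each row of the weight matrix is treated as one hidden state."""
    z = sigmoid(x @ p["Wz"] + h @ p["Uz"])            # update gate
    r = sigmoid(x @ p["Wr"] + h @ p["Ur"])            # reset gate
    h_new = np.tanh(x @ p["Wh"] + (r * h) @ p["Uh"])  # candidate state
    return (1.0 - z) * h + z * h_new

d, n = 4, 6                       # embedding size, number of nodes
params = {k: rng.normal(scale=0.1, size=(d, d))
          for k in ["Wz", "Uz", "Wr", "Ur", "Wh", "Uh"]}

A_hat = rng.random((n, n))        # normalized adjacency from the contact plan
H_t = rng.normal(size=(n, d))     # node embeddings H_t^{(l)}
W_prev = rng.normal(size=(d, d))  # previous weights W_{t-1}^{(l)}

# Summarize H_t to a (d, d) matrix so its shape matches W (here: the
# top-d nodes by L2 norm, a stand-in for the learned summarization).
top = np.argsort(-np.linalg.norm(H_t, axis=1))[:d]
H_summary = H_t[top]

W_t = gru_step(H_summary, W_prev, params)    # Eq. (8): W_t = RNN(H_t, W_{t-1})
H_next = np.maximum(A_hat @ H_t @ W_t, 0.0)  # graph convolution with evolved W
print(W_t.shape, H_next.shape)               # (4, 4) (6, 4)
```

Because only the weight matrices are carried through time, adding or removing nodes between time steps changes the shapes of $A_t$ and $H_t$ but leaves the recurrent state untouched, which is what makes the model robust to node churn.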
The GCN consists of $L$ layers of graph convolution, each of which includes a neighborhood aggregation that combines information from neighboring nodes. At time $t$, the $l$-th layer takes as input the adjacency matrix $A_t$ and the learnable node embedding matrix $H_t^{(l)}$, and outputs the updated embedding matrix $H_t^{(l+1)}$.

Since we assume the satellites and ground stations that each satellite communicates with, $C_i$, to be known, at each time step we create link embeddings by concatenating the embeddings of the source and destination nodes. That is, we concatenate the final-layer embeddings at the different time steps, $H_1^{(L)}, \ldots, H_{N_t}^{(L)}$, to compute the objective function $F$ using a fully connected neural network with a single output and average pooling.

4. Results

In this section, we first present the experimental setup, including the parameters of the DGNN model and training. Then, we present the results of the contact plan design based on simulated annealing using a DGNN for evaluating the objective.
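A minimal sketch of this readout for a single time step, assuming illustrative shapes and layer sizes (not those of the paper): each active link's embedding is the concatenation of its source and destination final-layer embeddings, a small fully connected network scores each link, and average pooling reduces the scores to the scalar objective.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8                                   # final-layer embedding size

# Final-layer node embeddings H^{(L)} for one time step (10 nodes) and
# the links (src, dst) active in the contact plan at that step.
H_L = rng.normal(size=(10, d))
links = [(0, 3), (1, 4), (2, 7), (5, 9)]

# Link embeddings: concatenation of source and destination embeddings.
E = np.stack([np.concatenate([H_L[s], H_L[t]]) for s, t in links])  # (4, 2d)

# Fully connected network with a single output (one hidden layer here).
W1 = rng.normal(scale=0.1, size=(2 * d, 16))
b1 = np.zeros(16)
w2 = rng.normal(scale=0.1, size=16)
hidden = np.maximum(E @ W1 + b1, 0.0)   # ReLU hidden layer
scores = hidden @ w2                    # one scalar per link

# Average pooling over links (and, in the full model, over time steps)
# yields the predicted objective value.
F_hat = scores.mean()
print(float(F_hat))
```

In the full model the same pooling runs over all time steps $1, \ldots, N_t$, so the prediction aggregates the whole contact plan rather than a single snapshot.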
73rd International Astronautical Congress (IAC), Baku, Azerbaijan, 2-6 October 2023. Copyright 2023 by the authors.

Fig. 2: Training and evaluation loss.

The model is trained for 16 hours using synthetic data corresponding to 30 satellites and 20 ground stations. Hyperparameters are selected using a grid search over different activation and loss functions, as well as the number and sizes of layers.

Fig. 3: Predicted and true normalized BDT for 100 different contact plans.

The model successfully identifies contact plans with worse objective values and achieves lower accuracy on contact plans with lower objectives. It predicts the BDT of a contact plan with a mean absolute error of 3.6 minutes.
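The surrounding design loop, simulated annealing with the trained DGNN standing in for the expensive objective evaluation, can be sketched as follows. The neighbor move, cooling schedule, and the toy `surrogate_delay` below are illustrative placeholders, not the paper's exact settings; in the actual method the surrogate is the DGNN forward pass over the candidate contact plan.

```python
import math
import random

random.seed(0)

def surrogate_delay(plan):
    """Stand-in for the DGNN evaluation of a contact plan (here a toy
    objective over a binary link-activation vector)."""
    return sum((b - t % 2) ** 2 for t, b in enumerate(plan))

def neighbor(plan):
    """Propose a new plan by flipping one randomly chosen contact."""
    p = list(plan)
    i = random.randrange(len(p))
    p[i] = 1 - p[i]
    return p

def anneal(plan, steps=2000, T0=1.0, alpha=0.995):
    best, best_cost = plan, surrogate_delay(plan)
    cost, T = best_cost, T0
    for _ in range(steps):
        cand = neighbor(plan)
        c = surrogate_delay(cand)
        # Accept improvements always; accept worse plans with
        # Boltzmann probability exp(-(c - cost) / T).
        if c < cost or random.random() < math.exp((cost - c) / T):
            plan, cost = cand, c
            if cost < best_cost:
                best, best_cost = plan, cost
        T *= alpha  # geometric cooling schedule
    return best, best_cost

initial = [random.randint(0, 1) for _ in range(20)]
plan, delay = anneal(initial)
print(delay)
```

The 20x speedup reported in the abstract comes precisely from this substitution: each candidate plan inside the annealing loop is scored by a single neural-network forward pass instead of a full network simulation.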