Flexible data processing solutions for space missions

Monolithic data processing centres handle the entire data set of a space mission, from data receipt up to the ready-for-science results. They have the advantage of combining all expertise and resources physically in one place, reducing the organisational overhead. On the other hand, such structures are not flexible to changes in the architecture and timeline of a space mission, the hardware is often outdated by the time the mission delivers its first data, and the infrastructure might be of no use after the completion of the mission. At the François Arago Centre at the APC in Paris we study various alternative approaches, such as distributed processing through Grid and Cloud infrastructures, and the use of small-scale clusters and High-Throughput Computing (HTC) centres. Distributed processing can significantly reduce costs both in the preparation and prototyping of the analysis pipelines and during the production phase. In addition, combining small, local computing centres with super-computing facilities allows small groups to contribute expertise and development support to large data centres, which are less flexible in their hardware and infrastructure development, and to Grid and Cloud environments, where the middleware imposes an adaptation phase on the users (especially on the Grid). Nevertheless, in view of upcoming space missions with PByte/year-scale data production, central computing centres with mass data storage facilities and computing farms of more than 1000 CPUs remain necessary to minimize data transfer and the time lost on input/output operations. Here we discuss studies we performed using local clusters, the super-computing (HTC) facility in Lyon, the Grid, and Cloud environments. Data processing for space missions can make use of any of these approaches, and solutions have to be tailored to the data rate, the processing and storage needs, and the structure of the collaboration. Flexibility appears to be a key aspect in finding the optimum data handling and processing architecture.
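
To make the data-transfer argument concrete, the following back-of-envelope sketch (ours, not from the text) converts a yearly data volume into the average sustained network bandwidth it implies; the example volumes are hypothetical and serve only to illustrate why PByte/year-scale production pushes the processing towards the storage.

```python
# Back-of-envelope estimate: the sustained bandwidth implied by a
# given yearly data production rate (illustrative figures only).

SECONDS_PER_YEAR = 365.25 * 24 * 3600  # Julian year in seconds


def sustained_rate_mbit_s(petabytes_per_year: float) -> float:
    """Average sustained transfer rate, in Mbit/s, needed to move
    `petabytes_per_year` of data continuously over one year."""
    bits_per_year = petabytes_per_year * 1e15 * 8  # PB -> bits
    return bits_per_year / SECONDS_PER_YEAR / 1e6  # bits/s -> Mbit/s


# Hypothetical mission data rates, chosen for illustration.
for volume in (1.0, 5.0, 10.0):
    print(f"{volume:4.1f} PB/year -> "
          f"{sustained_rate_mbit_s(volume):7.1f} Mbit/s sustained")
```

For 1 PByte/year this comes to roughly 250 Mbit/s sustained around the clock, before any retransmission or peak-load margin, which is why co-locating mass storage with a large computing farm, rather than shipping the data out to distributed resources, remains attractive at this scale.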